Posts Tagged ‘cloud computing strategy’

Is the Cloud for Everything?

Recently, Indu Kodukula, SunGard EVP and CTO, was interviewed by Smart Business Philadelphia.  Here are a few of his remarks.  – CM

The #1 reason companies want to use the cloud for their applications is to align their spending with business value.  Companies don’t know up-front what business return they would receive from a capital investment in enterprise IT.  Without the cloud, they have to make the investment anyway and hope it is profitable.

Using the cloud makes a fundamental difference, because you only pay for the compute resources you use or the data you store.  You don’t have hardware to buy or install and, in a managed environment, you don’t need internal resources to manage your IT.  The service provider takes responsibility for maintaining the software, servers and applications.

As a result, companies utilizing the cloud for enterprise IT can make investments that are automatically in line with business value.  Then, they can invest more capital into infrastructure and resources as the business becomes more successful.

Companies typically walk through several points when making the decision to use the cloud.  First, the moment something moves outside your firewall, you don’t own it anymore.  So you have to decide what to keep in-house and what to move to the cloud.  Second, you must consider performance and availability of data in the cloud.  In the cloud, multisite availability is used for applications that (1) can tolerate only about four hours of downtime a year, (2) need geographic redundancy, or (3) are responsible for keeping the business up and running.

How can businesses get started?

The first step toward moving applications to the cloud is to do a virtualization assessment.  Then, determine which applications to virtualize.  Next, take the virtualized applications and decide what to keep in-house and what to move outside your firewall.

Look for a cloud service provider that will guide you through the process, helping you understand and decide which applications should stay in-house—either because they are not ready to be virtualized or they are too tightly tied to the business—and which applications can be moved safely.  The goal is to create a roadmap for moving applications to the cloud data center.

Which applications are good fits for the cloud?

If you have an application that supports your business and has such strong growth that it will need 10 times more resources next year than it does today, the elasticity the cloud offers is a great option.  If the application also uses modern technology, which is easier to virtualize, that combination makes it compelling to move that application to the cloud.  Obviously, the business argument for moving older technology, like ERP, to the cloud is much weaker.

Is your company taking steps to determine how it can benefit from the cost savings of an enterprise cloud?

Download SunGard’s white paper, The Real Value of Cloud Computing.

Are More Applications Mission-Critical Than You Realize?

Some applications are obviously mission-critical—the website of an e-tailer or an ATM at a bank.  However, the criticality of other applications can go unrecognized unless you do a systematic qualification of each one.

To qualify applications, check these metrics for each:

  • Recovery point objective – how much data loss is tolerable?  All of today’s data entries?  The entire database, because restoring it is easy?
  • Recovery time objective – how long can the business go without access to the application before customer service, sales, accounting, etc., suffer?  How much data can be rebuilt and verified inside that time window—a few days’ worth, a few hours’ worth?
  • Recovery resources – what space, equipment and staffing are needed to replicate the data?  Would those resources be available if other mission-critical systems were down, too?  If not, how much additional capacity would you need?  (The sketch below shows one rough way to tier applications on these metrics.)
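To make the qualification concrete, here is a rough Python sketch of how applications might be tiered on RPO and RTO.  The application names, thresholds and tier labels are invented for illustration, not a prescribed methodology:

    # Hypothetical sketch: tier applications by recovery metrics.
    # Thresholds and example applications are illustrative only.

    def tier(rpo_hours, rto_hours):
        """Assign a recovery tier from RPO/RTO; the tightest requirement wins."""
        if rpo_hours <= 1 or rto_hours <= 4:
            return "mission-critical"
        if rpo_hours <= 24 or rto_hours <= 24:
            return "important"
        return "deferrable"

    # (application, tolerable data loss in hours, tolerable downtime in hours)
    apps = [
        ("order-entry", 0.25, 2),
        ("reporting", 24, 48),
        ("intranet-wiki", 72, 72),
    ]

    for name, rpo, rto in apps:
        print(f"{name}: RPO={rpo}h, RTO={rto}h -> {tier(rpo, rto)}")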

Once each application is evaluated, determine whether all the mission-critical applications can be recovered simultaneously, as would be needed with a data center incident caused by a flood, hurricane or tornado.  If the recovery requirements exceed current equipment, network and staff resources, consideration of a cloud-based recovery solution is in order.

Cloud-based recovery solutions offer access to low-cost or pay-as-you-use recovery infrastructure.  They can be provisioned on demand in the wake of failure events, with sufficient security and guaranteed performance.

Could unrecognized mission-critical applications be lurking in your data center?

Visit our Cloud Solutions Center for videos, white papers and case studies about SunGard’s Enterprise Cloud Services.

Recovery in the Cloud – Part I, CEO Decision Drivers

Ram Shanmugan, our Senior Director of Product Management for Recovery Services, was recently interviewed by Smart Business Philly magazine.  Below are some of the important points he discussed.  We’ll have more next week.  – Carl M.

“Weathering a storm” is more than just an off-hand comment these days.  The U.S. experienced eight disasters costing over $1B each in the first six months of 2011.  Few areas of the U.S. were spared the business complications caused by tornadoes, blizzards, wildfires and floods.

Planning for erratic weather can be tricky.  Of course, you want secure data, redundant infrastructure and business continuity processes, but balancing those needs against the need to fund revenue-generating IT projects is difficult.

Fortunately, “recovery in the cloud” offers a cost-effective, reliable option.  It lets you formulate the right availability service for each application, from mission-critical systems to important but infrequently used ones.

Four elements drive the decision to move to a cloud-based recovery service:

  1. Cost savings.  The ability to fulfill recovery needs at lower cost is the most significant driver.
  2. RPO/RTO.  Your recovery point objective (how much data you can afford to lose) and recovery time objective (how long you can tolerate an application being down before it is recovered) determine the level of resources you need to avoid serious impact to your business.
  3. Reliability. The true value of a recovery environment comes during a time of disaster, and managed cloud-based solutions offer higher reliability in recovery of mission-critical applications than do in-house solutions.
  4. Skilled Resources.  In-house recovery solutions require an investment in specialized skills to support the recovery infrastructure.  Cloud-based recovery eliminates that need.

Can your IT department recover from an outage without resorting to emergency resources and incurring emergency costs?

Visit our Cloud Solutions Center for videos, white papers and case studies about SunGard’s Enterprise Cloud Services.

The Cloud and the Availability Continuum – PART 2

Like dedicated hosting, cloud computing has to address availability.  Continued cloud outages, and the corresponding publicity, remind us of the importance of resiliency and availability.  One of the major benefits of cloud computing is the scalability and efficiency of multi-tenant infrastructure.  However, even cloud infrastructures have to run in a physical data center somewhere, bringing us back to the critical nature of infrastructure availability.

Fortunately, the same availability you are accustomed to as part of a dedicated environment can be found in cloud computing.  Availability can be viewed as a continuum that ranges from high availability to failover and recovery, with many nuances in-between.  This continuum of availability enables clouds to fulfill enterprise application and business needs at many different price points.

Platform Resiliency for Continuous Uptime

The first area to address is the resiliency of the platform itself.  Businesses requiring enterprise-class infrastructure need to look under the hood to determine how the infrastructure is architected and how resiliency is addressed.  A highly resilient environment should automatically detect the failure of a system component—whether it is a server, a network device, a full blade or a VM—and quickly shift to a redundant component to keep the application running at the current site.
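Conceptually, that resiliency boils down to a monitor-and-shift loop.  The Python sketch below is only an illustration of the idea—the health check and component names are hypothetical, not how any particular platform is built:

    # Illustrative sketch of automatic component failover within a site.
    # The health probe and component names are hypothetical.

    def healthy(component):
        # A real platform would probe the server, blade or VM here.
        return component["up"]

    def active_component(components):
        """Return the first healthy component, shifting past failures."""
        for c in components:
            if healthy(c):
                return c
        raise RuntimeError("no redundant component available")

    pool = [
        {"name": "blade-1", "up": False},  # failed component is skipped
        {"name": "blade-2", "up": True},   # traffic shifts here automatically
    ]
    print("serving from", active_component(pool)["name"])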

Failover

Failover is the capability to switch to a redundant or standby server, system or network upon the failure or interruption of the primary environment.  Cloud computing has allowed failover practices to become less reliant on physical hardware and therefore more available and less costly.  Service providers vary in the type of failover they provide as well as the time to respond, depending on the customer’s RPO and RTO needs.

Failover, or warm failover, can be used for applications that require slightly less than real-time recovery (hours versus seconds).  In a warm failover, a second site stands ready to be activated and made current as quickly as the customer’s recovery time objective requires.  One option is to bring the secondary site online using a previous copy of the primary site; usually the copy is from the previous day, but it can be older, depending on the business need.

High Availability for Mission-critical Apps

High availability addresses mission-critical production systems that require immediate, continuous, 24/7 access to data.  More technically, it means data must be duplicated at another location, usually in a different geographic area.   Essentially you are renting resources at one location and identical resources at another location, so costs are higher.

The communication method used between the systems also affects availability and costs.  Synchronous communication updates data from the primary system to the secondary system in near real time.  The secondary system mirrors the primary and is ready to go into operation if the primary fails for any reason.

With asynchronous communication, data waits in a queue until the secondary system is free to accept it, so it is inherently less real-time.  Again, the business need determines which communication method is better.
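
The trade-off is easy to see in miniature: a synchronous write does not complete until the secondary site has the data, while an asynchronous write returns immediately and the replica catches up from a queue.  A conceptual Python sketch (the in-memory “sites” are stand-ins, of course):

    # Conceptual sketch of synchronous vs. asynchronous replication.
    from collections import deque

    class Site:
        def __init__(self):
            self.data = {}

    primary, secondary = Site(), Site()
    replication_queue = deque()

    def write_sync(key, value):
        """Write completes only after the secondary is updated (RPO ~ 0)."""
        primary.data[key] = value
        secondary.data[key] = value  # caller waits until mirrored

    def write_async(key, value):
        """Write returns immediately; the replica catches up later."""
        primary.data[key] = value
        replication_queue.append((key, value))  # drained when secondary is free

    def drain_queue():
        while replication_queue:
            key, value = replication_queue.popleft()
            secondary.data[key] = value

    write_sync("order-1001", "confirmed")
    write_async("order-1002", "pending")
    drain_queue()
    print(secondary.data)
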
Recovery for Availability

Recovery represents the other end of the availability continuum.  Cloud computing is changing the disaster recovery landscape.  The scalability and flexibility of cloud computing platforms enable higher application availability.  Recovery can be used as a backup to a production system already in the cloud or as a recovery solution for another data center.  Further, the backup can be online, ready to operate at the cloud site (like a warm failover), or offline at a cloud site, as in traditional recovery scenarios, since the cloud is a cost-effective recovery site for legacy systems.

As is obvious, different applications require different levels of availability, and applications should not be shoehorned into a “one size fits all” cloud environment.  The best cloud providers will work closely with you to understand the business requirements of your applications and devise the appropriate level of availability for each application you want to move to the cloud, along with any need for cloud resources to facilitate recovery of applications you do not move.

Click here to view the SunGard Recover2Cloud Overview

The Cloud and its Continuum of Availability – PART 1

One of the major benefits of cloud computing is availability, and that availability comes in a continuum that ranges from high availability through high resiliency, warm failover and failover to recovery, with many nuances in-between.  This continuum of availability enables clouds to fulfill application and business needs at many different price points.

High Availability for Mission-critical Apps

High availability is used for mission-critical production systems that require immediate, continuous, 24/7 access to data.  More technically, it means data must be duplicated at another location, usually in a different geographic area.   Essentially you are renting resources at one location and identical resources at another location, so costs are higher.

The communication method used between the systems also affects availability and costs.  Synchronous communication replicates the data in near real-time.  That is, data from the first system immediately updates the second system.  The second system mirrors the first and is ready to go into operation if the first system fails for any reason.

Asynchronous communications sends data from the first system to the second, where it waits in queue until the second system is free to accept it.  Again, the business need determines which communications method is better.

High Resiliency for Continuous Uptime

High resiliency is used for applications that do not require high availability.  In a highly resilient environment, automatic systems detect the failure of a system component—whether it is a server, a full blade or the VM software—and quickly shift to an alternate component to keep the application running at the current site.

Warm failover and failover are used for less critical applications.  In warm failover, a second site stands ready to be activated and made current as quickly as possible.  In failover, a second site is brought up using a previous copy of the primary site.  Usually the copy is from the previous day, but it can be older, depending on the business need.

Recovery for Backup

Recovery represents the other end of the continuum.  Recovery is used as a backup to a production system already in the cloud or as a backup to another data center.  Further, the backup can be online, ready to operate at the cloud site (like a warm failover), or offline at a cloud site, as in traditional recovery scenarios, since the cloud is a cost-effective recovery site for legacy systems.

As is obvious, different applications require different levels of availability, and applications should not be shoehorned into a “one size fits all” cloud environment.  The best cloud providers will work closely with you to understand the importance of your applications to your business and devise the appropriate level of availability for each application you move to the cloud, along with any need for cloud resources to facilitate recovery of applications you do not move.

How does the continuum of availability fit with your move to the cloud?

Visit our Cloud Solutions Center for videos, white papers and case studies about SunGard’s Enterprise Cloud Services.

Multi-site Options Allay High Availability, Recovery and Interconnectivity Concerns

Organizations moving essential business applications to the cloud are often concerned that they will gain cost-efficiency and on-demand capacity but lose application availability.  Given the importance of production applications to the continuity of your business, those concerns are legitimate.

Fortunately, new capabilities being added to our Enterprise Cloud Services address those concerns.  Today, we are making high availability (at the 99.95 percent level) part of our Enterprise Cloud Services and including that commitment in our standard Service Level Agreement (SLA).  In doing so, we are going beyond the norms for the cloud computing industry.

Our high availability commitment is possible because of enhancements to our fully redundant architecture.  It now utilizes two geographically diverse production sites integrated with recovery capabilities.  These enhancements afford seamless cloud services continuity and greater availability assurances for your applications.

In addition, we have added a new option for cloud applications that do not require high availability: Managed Multi-Site Recovery.  With this option, a secondary cloud site becomes available for recovery within four hours of an outage at your primary cloud site.  That four-hour recovery time objective is backed by your SLA, too.
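
For context, an availability percentage translates directly into allowed downtime.  The quick calculation below is our own arithmetic, not part of the SLA language:

    # Convert an availability percentage into allowed downtime per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def allowed_downtime_hours(availability_pct):
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    # 99.95% availability allows roughly 4.4 hours of downtime per year,
    # in the same neighborhood as a four-hour recovery time objective.
    print(f"{allowed_downtime_hours(99.95):.1f} hours/year at 99.95%")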

Because more and more organizations operate in the hybrid world of cloud, co-location and managed services, we are now offering the ability to interconnect applications running on our Enterprise Cloud Services with other environments hosted in our data center(s).  This connectivity can be done within the same site or between multiple sites.  That means data from your legacy environments can be shared easily with your cloud-based applications to maximize business value.

Finally, we now provide active management for Microsoft Exchange Server, Microsoft Active Directory and Hosted Blackberry Services to reduce your IT administration burdens and help ensure production workloads are available.

Hybrid Clouds — Use Cases and Considerations

Hybrid clouds are becoming more popular as companies seek to optimize their applications and data based on risk, architecture and business growth.  As a result, hybrid clouds are taking several different forms.

The Cloud as Partner

The most common hybrid cloud scenario is one in which a set of applications resides in the cloud with the remaining applications residing in the company’s on-site data center.  This arrangement enables the company to take advantage of the flexibility and cost-savings of the cloud where appropriate, while keeping control over more sensitive applications.

The Cloud as Proving Ground

Another use of a cloud is as temporary workspace.  For example, developers can load an application into the cloud, then add and test new features without affecting day-to-day operations.  Similarly, they can set up a newly purchased application, say an ERP or document management system, run it, build it out and size it before moving it in-house.

The Cloud as Extra Capacity

Some companies use a cloud for burst capacity, letting sudden spikes in traffic call into action the additional resources of the cloud to ensure continuity of service.  In other cases, companies mirror their applications in a cloud to provide a hot standby site.
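
Burst capacity usually reduces to a simple threshold rule: demand beyond what the on-site environment can serve spills over to cloud resources.  A toy Python sketch with invented numbers:

    # Toy sketch of threshold-based burst capacity (numbers are made up).
    ON_SITE_CAPACITY = 1000  # requests/second the data center can serve

    def route(load):
        """Send overflow beyond on-site capacity to the cloud."""
        on_site = min(load, ON_SITE_CAPACITY)
        cloud_burst = max(0, load - ON_SITE_CAPACITY)
        return on_site, cloud_burst

    for load in (600, 1000, 2500):  # a sudden traffic spike
        on_site, burst = route(load)
        print(f"load={load}: on-site={on_site}, cloud burst={burst}")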

Hybrid Cloud Considerations

Regardless of the type of hybrid cloud your company implements, certain considerations come into play, especially these.

Network connectivity. You will need to consider your connection, bandwidth, firewall requirements and how changes and upgrades will be handled between your on-site data center and the cloud.  You will probably need a Virtual Private Network (VPN) connection to the cloud to provide the level of security your company needs.  Today VPNs typically come in two types. Internet Protocol Security (IPsec) authenticates and encrypts data over the public internet, while Multi-protocol Label Switching (MPLS) VPNs are offered by carriers to provide companies with more secure, but still shared, private IP networks.

User Access. If you are using a Windows or Linux-based cloud, your user identification and authentication can remain the same, but you will need to take into account the fact that your cloud vendor may also have access to the servers it operates for you.  Consequently, you will need to ensure that your vendor follows access policies that are acceptable to your auditor.

Data Migration.  For small applications, you can transmit your application and data over the network.  However, network transmission is too slow for large applications, so burning a disk and overnighting it to the vendor is faster and more efficient for large data sets.
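
The break-even is straightforward arithmetic—divide the data size by the effective network throughput and compare the result with an overnight shipment.  The link speed, efficiency factor and data sizes below are hypothetical:

    # Rough sketch: network transfer time vs. shipping a disk overnight.
    def transfer_hours(size_gb, link_mbps, efficiency=0.7):
        """Hours to move size_gb over a link, allowing for protocol overhead."""
        bits = size_gb * 8 * 1000**3          # gigabytes -> bits (decimal GB)
        seconds = bits / (link_mbps * 1e6 * efficiency)
        return seconds / 3600

    for size_gb in (50, 500, 5000):
        hours = transfer_hours(size_gb, link_mbps=100)
        note = "ship a disk" if hours > 24 else "send over the wire"
        print(f"{size_gb} GB over 100 Mbps: ~{hours:.1f} h -> {note}")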

Your hybrid cloud strategy should support your business strategy.   Formulating the right cloud strategy can not only cut costs, but also bring the operational efficiencies and extra capacity your company needs to expand and grow.

How might your company initiate and evolve a cloud strategy?

For more information, visit our Cloud microsite

Five Considerations When Evaluating Cloud Computing Architectures

An excellent starting point for an organization looking at cloud computing platforms is to examine its IT architecture.  Only by aligning the architecture – compute, network, data center, power and storage resources – with applications can a company be on the path to achieving the reliability and performance it requires within a cloud environment.

In cloud computing, true protection is an outcome of the right architecture for the right application.  Organizations need to fully understand their individual application requirements and, if using a cloud platform, the corresponding cloud architecture.  With that knowledge, they can make informed decisions about what cloud platform best meets the reliability and performance requirements of their specific applications.

Here are five considerations for companies looking at cloud computing architectures.  

Availability.  Not all applications are created equal, nor are all cloud platforms the same.  Organizations need to tier their applications, identifying which applications need to be highly available, which can accept downtime and how much downtime is acceptable.  They need to understand the business risk associated with a lack of availability of their data.  For those applications that need to be highly available, businesses should consider enterprise-class technologies that have been rigorously tested versus looking at building something internally. It’s also important to look at multi-site solutions and disaster recovery/business continuity planning.  For most businesses, this means working with a service provider or consultant because they usually have access to greater levels of expertise and provide these services as their core business.

Security.  Security is still the primary concern for businesses regarding the cloud.  Concerns include the loss of control of their sensitive data, the risks associated with a multi-tenant environment, and how to address standards and compliance.  Organizations need to know how a shared, multi-tenant environment is segmented to prevent customer overlap.  How is the solution architected and is the service provider’s cloud infrastructure – network, virtualization and storage platforms – secure?  

Manageability.  Businesses need to understand what they are accountable for versus what they expect from a service provider.  Most public cloud vendors do not provide administrative support.  Organizations need to either have the technical expertise in-house to design the right solution or seek the services of an outside provider.  They should understand what level of management their applications require and have an identified change management process.

Performance.  As with a more traditional hosting model, it’s important to understand workload demands on the infrastructure.  Companies also need to understand what the bottlenecks are and how the cloud architecture they have or are evaluating can meet those needs. Organizations should perform their own testing to understand how a cloud environment affects compute, storage and network resources.

Compliance. Organizations need to understand where their data will reside as well as who will interact with it and how.  They need to understand which areas of compliance the service provider controls and how to audit against the standards and regulations to which they need to adhere.

PaaS: The cost-saving “middleware” for cloud infrastructures

Today we hear from Sarabjeet Chugh on PaaS: the cost-saving “middleware” for cloud infrastructures.

Not long ago, a survey of Fortune 100 companies showed that 77% of IT budgets go to maintaining the status quo.  Only 23% of the budget actually drives new revenue.  In recent years, a few dents have been made in IT costs by better development tools and clouds.  Development frameworks and languages like Spring, Ruby on Rails, PHP, Python and Django let programmers code websites, web applications and web services more quickly, and clouds spread the cost of infrastructure components across multiple companies.

Nevertheless, infrastructure maintenance—at 42% of the budget—remains the single biggest component of the maintenance burden.  One thing that could make another dent in maintenance costs is an easy way to on-ramp an application into production in a cloud.  Getting applications to the cloud more quickly and deploying them with less programming to link the application to the infrastructure resources would decrease both development and maintenance costs.

New Middleware for New Architectures 

To do that, new middleware that works with the new hardware architecture of the cloud is needed.  Existing middleware is antiquated.  Programmers spend nearly 50% of their time coding non-application functions, such as database caching, billing, metering, messaging and authorization.  Different framework and database combinations need different versions of the middleware, and every version has to be maintained as databases and servers move within enterprise data centers, public clouds or both.

The new middleware should be a PaaS element that is open and supports multiple programming frameworks, from Java-based Spring to PHP-based micro frameworks and Microsoft .Net, among others.  It also needs to be independent of the infrastructure, so it can support environments from public clouds built using different hypervisor technologies to local laptops.  Similarly, it should be independent of the application business logic, so the application is not muddied with the logic for addressing databases and constructing messages and, thus, is more portable.  Finally, it needs to include a reusable library of services that can be easily assimilated into new and existing application code to simplify the programming of 3-tier applications.
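
One way to picture that independence: application code talks to a generic service interface, and the platform binds the concrete backend at deployment time.  The Python sketch below is our own illustration, not any specific PaaS product; the class and environment names are hypothetical:

    # Hypothetical sketch: a framework-neutral cache service bound at deploy time.

    class InMemoryCache:
        """Backend for a developer laptop."""
        def __init__(self):
            self._store = {}
        def get(self, key):
            return self._store.get(key)
        def put(self, key, value):
            self._store[key] = value

    class ManagedCloudCache(InMemoryCache):
        """Stand-in for a distributed cache a PaaS would provision and operate."""
        pass  # same interface; different, platform-managed implementation

    def bind_cache(environment):
        """The platform, not the application, chooses the backend."""
        return InMemoryCache() if environment == "laptop" else ManagedCloudCache()

    # Business logic codes only against get/put and never changes per environment.
    cache = bind_cache("laptop")
    cache.put("session:42", {"user": "alice"})
    print(cache.get("session:42"))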

Accelerate Time-to-Market

The major benefit of PaaS is improved developer productivity and, therefore, an accelerated time-to-market.  Organizations using PaaS techniques typically report operational savings of 30% or higher.  2011 is being called the year of PaaS, and for good reason: enterprise-grade IaaS has gained mindshare and acceptance in small and medium enterprises.  By leveraging PaaS, developers avoid the many hassles of updating machines and configuring middleware and can focus their attention on delivering applications.  Reducing these obstacles means faster delivery of applications and makes cloud portability a reality for enterprise applications.

How much time does your staff spend maintaining applications for infrastructure changes?

Building Cloud-friendly Applications

Today we hear from Sarabjeet Chugh, Director of Technology Business Development (Cloud Services and Infrastructure).

Cloud adoption is progressing rapidly.  Many companies are in the process of determining their migration strategy, and most vendors are refining their processes to provide a smoother on-ramp to the cloud. 

Now that cloud is a reality, we need to think about how application development has to evolve to fit the cloud.  The application life cycle is broken.  Programmers write code, run tests and “throw it over the wall” to Operations, where technicians then struggle to accommodate the resource requirements of the application.

Old Code is Often Slow Code

Applications made heavy by poorly structured code that requires multiple gigabytes of memory and a huge storage footprint can run in the cloud, but the expense quickly becomes obvious, as the rough sketch below illustrates.  Further, such applications offer few options and little flexibility to mitigate those expenses.
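
The back-of-the-envelope comparison below shows how that expense surfaces under pay-per-use pricing.  The rates and footprints are invented purely for illustration:

    # Invented-numbers sketch: monthly cost of a lean vs. a bloated application.
    HOURS_PER_MONTH = 730

    def monthly_cost(mem_gb, storage_gb, rate_per_gb_ram_hour=0.01,
                     rate_per_gb_storage_month=0.10):
        ram = mem_gb * rate_per_gb_ram_hour * HOURS_PER_MONTH
        storage = storage_gb * rate_per_gb_storage_month
        return ram + storage

    lean = monthly_cost(mem_gb=4, storage_gb=100)
    bloated = monthly_cost(mem_gb=64, storage_gb=5000)
    print(f"lean: ${lean:,.0f}/mo  bloated: ${bloated:,.0f}/mo")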

New Technologies Improve Portability

A cloud-friendly application is one that can be deployed on any platform, locally or in the cloud.  To achieve such an application, new application development frameworks, such as SpringSource from VMware, have emerged that help to tease out the application’s business logic from underlying resource requirements. They also improve developer productivity by providing supplementary web services, message routing, authentication and application-level services, such as memory caching and contingency handling.

By insulating the infrastructure-dependent components and permitting them to be resolved in the production environment, the application becomes more portable, reusable and maintainable.  For example, a cloud-friendly application could run in your data center but fail over to SunGard if an incident occurs.  Similarly, server images could be transmitted to SunGard and brought up with full affinity and metadata information.

Does your company have standards for writing portable code?

Download SunGard’s white paper, “All clouds are not created equal.”