Capacity Management | SERVICE DELIVERY

When managing capacity, some companies make the mistake of reviewing and managing only server use while ignoring network, disk, and tape capacity. Disk space often goes unmanaged. With the decreasing cost of disk space, many organizations find it easier to keep purchasing more rather than spend the time to manage it closely. When budgets are cut, however, take the time and make the effort to review file management, file retention, and disk space to ensure optimization. One company lacked a good policy on tape retention; by developing a policy that met the business requirements, it eliminated many tapes, avoided buying new ones, and saved $70,000 per year.
Revisit capacity projections in light of a downturn in business. Review network bandwidth and usage, server utilization, and disk use to determine if you are able to realize cost savings by downsizing or shifting equipment.
Implementing technologies that allow capacity pooling (e.g., server virtualization, multi-tenanting, and clustering with load balancing for servers, or SAN and NAS for storage) frequently yields savings by moving to a single safety margin for all uses instead of a separate safety margin for each component. Server virtualization has also somewhat reduced the emphasis on capacity planning, as it makes adding incremental capacity much easier.
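To see why pooling often needs less total headroom, consider a minimal sketch with hypothetical figures: if demand variations on the components are independent, the pooled safety margin grows with the square root of the combined variance rather than the sum of the individual margins.

```python
import math

# Illustrative sketch (hypothetical figures): a single pooled safety margin
# is smaller than the sum of per-component margins, assuming the demand
# variations on the components are independent.

def margins(stddevs, k=2):
    """Safety margin at k standard deviations: separate vs. pooled."""
    separate = k * sum(stddevs)                          # one margin each
    pooled = k * math.sqrt(sum(s * s for s in stddevs))  # one shared margin
    return separate, pooled

# Four servers whose demand each varies with a standard deviation of 10:
separate, pooled = margins([10, 10, 10, 10])
print(separate, pooled)  # 80 40.0 -> pooling halves the headroom here
```

The effect grows with the number of pooled components, which is one reason consolidated pools tend to run at higher utilization for the same risk.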

Service Level and Availability Management

Service level agreements (SLAs) balance the users’ desired service level and expectations with the associated costs. Therefore, if you need to reduce costs, it is prudent to revisit service levels for possible reductions. Review metrics to determine actual service level performance and see whether you exceed the agreed-upon service levels. If you are exceeding them, consider scaling service back to the agreed level where doing so reduces costs. Keep in mind that IT can improve service either by increasing the average level of service or by reducing the variability in service delivery. Be sure to communicate any changes in planned service to the business and obtain agreement.
In light of necessary cost reductions, revisit service levels with each area of the business. You may be able to reduce required service levels and scale back resources or contracts. Review service levels against business value to ensure costs and benefits are aligned and that users do not set service levels based on emotion. It is easy to say you need everything available all the time with immediate response, until you put a price tag on the request. Whenever possible, determine the actual costs of the requested level of service and of the various options. If you change the service level provided to the business, be sure to go back to vendors’ underpinning contracts and get price reductions for the changes. Consider reductions to all components of service levels, including:
  • Service hours
  • Availability
  • Throughput
  • Support levels
    Top Tip: Match response rate to business need

    "Reduce services by matching the response rate to the business need. For example, review your response to network monitoring. If it is an empty office building, do you need to dispatch someone at 8 p.m.? Monitoring doesn't cost you, but your response does."
    —Lynn Willenbring
    City of Minneapolis

  • Responsiveness
  • Restrictions
  • Functionality
  • Contingency
  • Security
  • Data retention
  • Backup requirements
  • Problem escalation
  • Costs
Calculate and communicate potential cost savings if you were to reduce the availability or performance requirements. Review changes to anticipated user volumes to determine potential impact to service levels or potential cost reduction areas.
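One way to put that price tag on availability requests is to translate each level into allowed downtime per year. A minimal sketch follows; the dollar figures are hypothetical placeholders for your own cost quotes.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# Hypothetical annual infrastructure cost of delivering each level:
options = {99.0: 120_000, 99.9: 180_000, 99.99: 320_000}

for pct, cost in sorted(options.items()):
    minutes = allowed_downtime_minutes(pct)
    print(f"{pct}%: {minutes:,.0f} min/yr of allowed downtime at ${cost:,}/yr")
```

Showing users that each additional "nine" roughly halves or quarters the allowed downtime, at a steeply rising cost, often makes the service-level conversation far more concrete.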
Complete a regular review of current infrastructure components against availability requirements with a view to optimizing equipment and lowering costs. With advances in infrastructure technology, it is often possible to upgrade components to new technology and increase availability while actually decreasing costs.
Continue to review metrics, availability, performance, and actual service levels to be sure that you are meeting the business requirements in light of cost reductions, delaying hardware upgrades, or other reductions that you have taken.

Process Frameworks | PROCESS OVERVIEW

Although quality initiatives have existed in businesses for many years, process improvement in IT is a newer area of focus for most organizations. While companies in Europe focused on IT process improvement slightly earlier, U.S. organizations did not jump on board until the late 1990s and early 2000s. There are various frameworks and guidelines that companies use for IT process improvement, including:
  • Information Technology Infrastructure Library (ITIL). ITIL is a high-level framework or foundation of recommended practices for IT operations. It is a customizable framework of checklists and procedures. You adopt and adapt ITIL in different ways according to the needs of your organization. Version 3 was released in 2007 and includes service strategy, service design, service transition, service operation, and continual service improvement. ITIL information is available at the user group itSMF, the Office of Government Commerce, or
  • Control Objectives for Information and Related Technology (COBIT). First released in 1996, COBIT is a set of recommended practices for IT governance and control. COBIT ensures that services and information meet quality, fiduciary, and security needs. COBIT 4.1 has 34 processes that cover 210 control objectives organized in four domain areas. The domains are planning and organization, acquisition and implementation, delivery and support, and monitoring and evaluation. It is a control and audit framework providing a set of key goal and performance indicators, and critical success factors for each of its processes. COBIT leverages ITIL to identify control points. COBIT information is available at
  • International Organization for Standardization (ISO). ISO, a global network of national standards bodies, provides international standards for quality management systems and specifies requirements for products, services, processes, materials, and systems. Its standards address quality, ecology, safety, economy, reliability, compatibility, interoperability, efficiency, and effectiveness.
  • Capability Maturity Model (CMM). CMM is a method for evaluating and measuring the maturity of the software development process. It identifies five levels of process maturity: initial, repeatable, defined, managed, and optimizing. Within each of the maturity levels are key process areas for goals, commitment, ability, measurement, and verification.
  • Capability Maturity Model Integration (CMMI). CMMI provides guidance for improving processes to manage the development, acquisition, and maintenance of products or services. CMMI information is available at
  • Six Sigma. Motorola originally implemented Six Sigma to improve quality by controlling and removing defects and variation. Sigma denotes a standard deviation: if six standard deviations fit between the process mean and the nearest specification limit, there will be practically no failures. As the process standard deviation increases, or the mean drifts from the center of the tolerance, fewer standard deviations fit between the mean and the limit, increasing the likelihood of items outside specification. Six Sigma is about improving and innovating processes. Six Sigma information is available at or
  • Lean IT. For years, manufacturing has used lean techniques; IT has only recently begun adopting them to eliminate waste from the value stream and improve the cost, quality, speed, and agility of IT processes. Individuals such as Henry Ford, W. Edwards Deming, and Kaoru Ishikawa, and companies like Toyota, defined and perfected these principles. Core principles of lean include focusing on the customer, continuously improving, planning for change, automating processes, empowering the team, designing quality into processes, and optimizing the whole.
  • Sarbanes-Oxley (SOX). Enacted in 2002, SOX sets standards for accountability in business practices for public companies (some of its requirements apply only to companies valued at more than $75 million). It requires that the CEO certify the accuracy of financial statements, an annual assessment of internal financial controls, and real-time reporting of events that could materially affect financial results. Although SOX compliance usually adds work rather than improving process efficiency, it is a requirement and should be integrated with your processes.
  • Committee of Sponsoring Organizations of the Treadway Commission (COSO). COSO is a framework for financial controls to address regulatory compliance. It provides guidance on governance, business ethics, internal controls, risk management, fraud, and financial reporting. Find COSO information at
  • Microsoft Operations Framework (MOF). MOF is a structured approach to help customers achieve operational excellence on the Microsoft platform. It includes recommended practices, principles, and models for high availability, reliability, and security for mission-critical systems. ITIL is the foundation for MOF. Release 4.0 was completed in 2008 and integrates governance, risk and compliance activities, management reviews, and recommended practices. MOF includes three phases: plan, deliver, and operate, with a foundational layer for managing. MOF information is available at
  • ISO/IEC 20000, British Standard (BS 15000). This is an international standard for IT service management. It is complementary to ITIL but also uses components of COBIT and MOF. It includes a specification and a code of practice, and it is based on the ISO principle of "document what you do and do what you document." BS 15000 information is available at or
  • Project Management Body of Knowledge (PMBOK). PMBOK is an internationally recognized standard that provides fundamentals for project management. It uses five process groups: initiating, planning, executing, monitoring and controlling, and closing. Information on the Project Management Institute and PMBOK is
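The sigma arithmetic described under Six Sigma above can be made concrete with the normal tail probability. This sketch uses the conventional 1.5-sigma long-term shift, which yields the familiar 3.4 defects per million opportunities at six sigma.

```python
import math

def defects_per_million(sigma_level, shift=1.5):
    """One-sided normal tail beyond (sigma_level - shift) standard
    deviations, scaled to defects per million opportunities (DPMO).
    The 1.5-sigma shift is the conventional long-term drift allowance."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

print(round(defects_per_million(6), 1))  # 3.4 DPMO at six sigma
print(round(defects_per_million(3)))     # 66807 DPMO at three sigma
```

The steep drop from roughly 66,800 defects per million at three sigma to 3.4 at six sigma is exactly the "fewer items outside of specification" effect the framework entry describes.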
Rather than adopting any single framework, use elements from all of these frameworks and methodologies; each has useful components. ITIL and COBIT seem to be the most used frameworks at this time. Figure 1 shows how one company integrated the various frameworks and explained how each would apply to improve processes and reduce costs. Bob Lewis, in his latest book, A Manifesto for 21st Century Information Technology, has several excellent chapters discussing processes and practices.

Figure 1: Process Frameworks


In the context of process improvement, waste refers to any expenditure of time, effort, or money that does not result in a corresponding increase in value in the eyes of the company's external paying customers. Process improvement reduces costs by improving the following areas of waste typically found in IT organizations:
  • Repeatedly fixing the same incidents; duplication of effort
  • Reworking failed changes, cost of errors
  • Reinventing the wheel; solving problems and writing software that already exists elsewhere
  • Maintaining data in multiple places
  • Maintaining assets (e.g., software, applications, network lines) that are not used by the business
  • Late detection of errors leading to an excessive expenditure of time, effort, and money in remediation
  • Misallocation of resources or confused employees, resulting in time spent on less important work
  • Customer outages impacting customer satisfaction or revenue
  • Unreliable, inconsistent service, business disruptions, poor system availability and performance, or missed project delivery dates resulting in lost revenue, lost opportunity, or lost productivity in the business
  • Missing project budgets resulting in additional costs to deliver
  • Misusing investments or assets, or perhaps not using them to their full potential
  • Lost time dealing with preventable problems and firefighting
  • Demotivated employees not achieving full productivity
  • Completing tasks that can be done through automation, such as physically printing and delivering reports
  • Penalties and fines


Data Center Consolidation

Whenever possible, consolidate data centers. Data centers are most cost-effective and efficient when running at high capacity. A newer, highly efficient consolidated data center has much lower total costs (per square foot or per delivered watt) than many smaller data centers. Every data center costs money in space, power, cooling, bandwidth, administration, maintenance, and support. You may be able to free up a building, floor, or room by consolidating data centers. The cost savings from consolidating data centers might not be short term, but the long-term impact is significant. Several large companies are consolidating worldwide data centers into a few super data centers that serve their facilities. One company estimated savings of up to $2,000 per square foot of consolidated data center space.

Automation and Remote Management

New tools and utilities are continually being developed to automate support of the data center, improve efficiencies, provide remote management, and improve availability of your infrastructure. These tools save considerable labor costs and reduce the cost of outages. They move support from reactive maintenance to proactive and preventive services, which significantly improves the efficiency of the IT support staff. Remote diagnostic tools reduce IT support costs and enable data center consolidation.

Power Usage and Green IT

Companies spend a lot of money on wasted energy, including servers and desktops that sit idle at night. Every minute that a computer draws less power translates to reduced electricity costs, so managing power usage results in cost savings. One company estimated that powering and cooling a single server cost approximately $3,000. One company saved millions of dollars by implementing a power management tool, while another estimated cost reductions of around 20 percent. By turning off unused desktops, laptops, and servers, the latter saved $40 per machine per year and received utility company rebates of over $10 per computer. It also improved security, as turned-off machines cannot be infected or compromised. Synchronize power management with maintenance and update schedules.
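Using the per-machine figures cited above ($40 saved per machine per year plus a rebate of roughly $10 per computer), fleet-wide savings scale simply. The fleet size below is a hypothetical placeholder.

```python
ANNUAL_SAVINGS_PER_MACHINE = 40  # electricity saved per powered-down machine (from text)
UTILITY_REBATE_PER_MACHINE = 10  # approximate one-time rebate per computer (from text)

def first_year_savings(machines):
    """First-year savings: recurring electricity savings plus one-time rebates."""
    return machines * (ANNUAL_SAVINGS_PER_MACHINE + UTILITY_REBATE_PER_MACHINE)

print(first_year_savings(5_000))  # 250000 for a hypothetical 5,000-machine fleet
```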
Top Tip: Power usage

"We significantly reduced power usage through data center improvements and replacing old equipment with newer, more efficient equipment. We implemented best practices with hot and cold aisles, raising the temperature in the data center, shutting down computers, and making people more power conscious. We found out that the cost of cooling machines can be more than the cost of buying a machine."
—Anne Agee
University of Massachusetts

Another company saved cooling costs by raising operating temperatures in the data center from 75 to 77 degrees, using variable-speed fans in the computer room air-conditioning units, optimizing the airflow under floors, and implementing cold-containment techniques. Consider implementing more energy-efficient servers, disk, and other hardware. Audit and review energy bills to find overcharges. One company estimated a 60 percent savings in energy costs with consolidation, virtualization, installing newer and more efficient equipment, and implementing tools providing power management.
Industry averages show that servers account for the largest share of data center energy costs (31 percent), followed by heating, ventilation, and air conditioning at 17 percent, storage devices at 14 percent, and network equipment at 13 percent. Virtualization and consolidation are two of the most popular and beneficial green initiatives, with companies estimating approximately 15 percent savings from each.

Desktop Virtualization and Thin Clients

Virtualization of the desktop is not yet as prevalent as server virtualization, but many companies are investigating desktop virtualization and implementing pilots. Virtualization provides a layer between the hardware and software and presents a logical view of the computing resources. Therefore, each server acts like a group of servers, each disk drive functions like a pool of disks, and each desktop uses centralized computing power. Machine virtualization inserts a virtualization layer (hypervisor) between the operating system instances and the hardware, whereas application virtualization inserts the layer between the application and the operating system. With application virtualization, when the user needs an application, the server software downloads the application to the machine and it runs as if it were installed; when the user is done, it is uninstalled and available for another user.
Top Tip: Desktop virtualization

"We used desktop virtualization to be able to run applications remotely during a disaster. We were able to do this during Hurricane Gustav. This provided an up-time gain with minimal costs that we did not have in place for Katrina."
—Roger Champagne

Top Tip: Desktop virtualization savings

"We are starting a desktop virtualization program. We anticipate a minimum of 15-20% savings."
—Paul Kay
Long Term Care Group

Desktop virtualization moves the end-user operating environment from a dedicated piece of hardware in a local PC to a virtual machine on shared hardware. You can virtualize applications by hosting, updating, and patching a single application instance and delivering the functionality over the network rather than deploying and maintaining instances on each individual PC. It is a viable option for cost reduction depending on the application and business needs. You are able to realize cost savings in virtualizing clients by decreasing support and maintenance costs, and reducing downtime due to desktop issues. One company reduced desktop total cost of ownership by an estimated 20 percent by virtualizing desktops and centralizing desktop management. With a virtualized desktop environment, you are able to access a desktop from any location using any device, which increases functionality and supports remote use.
Virtualizing desktops can be a major endeavor, and you should proceed slowly. Although virtualized client hardware and software are significantly less expensive than a full PC, you need to consider the additional costs of the required back-end infrastructure. Desktop virtualization also has a significant impact on network capacity. Be sure to review the network to ensure it is robust enough to carry the additional traffic, and include network and server upgrade costs in the return on investment calculation. Desktop virtualization may require a significant up-front investment, and cost savings will be long term rather than short term. A partial implementation of desktop virtualization may actually increase your overall costs, as you have to maintain the overhead of both methods. During an interview, a CIO who was not convinced of the cost savings of desktop virtualization commented, "You have the additional blade in the data center, and you have the additional costs of the blade in an expensive facility, which has to be less expensive than that of a PC for any benefit to be realized. You also have introduced a single point of failure."
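The caution about back-end costs can be sketched as a simple total-cost comparison. All figures below are hypothetical placeholders, not vendor pricing.

```python
def annual_tco(seats, per_seat_cost, backend_fixed=0):
    """Annual total cost of ownership for a desktop estate: per-seat
    costs plus any fixed back-end infrastructure cost."""
    return seats * per_seat_cost + backend_fixed

# Hypothetical figures for a 1,000-seat estate:
traditional = annual_tco(1_000, per_seat_cost=900)
virtualized = annual_tco(1_000, per_seat_cost=550,
                         backend_fixed=250_000)  # servers, storage, network upgrades

print(traditional - virtualized)  # 100000 -> savings survive the back-end costs

# A partial rollout keeps the full fixed back-end cost while still paying
# for traditional seats, which is how overall costs can actually increase.
```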
Top Tip: Desktop virtualization and the network

"We did a pilot for desktop virtualization. The problem with desktop virtualization is that any latency on the network is very painful. Every network has some latency. As you lower the costs on the desktop, the provisioning costs on the network can go up. However, there are some places that it does make sense."
—Samuel J. Levy
University of St. Thomas

Investigate alternative client architectures and other methods to reduce the requirements for desktops and provide a more thin-client approach, which reduces support and maintenance costs and delays the need for upgrades. Citrix is a common example of software that many companies have deployed to reduce desktop costs. Another example is VMware's ACE product, which still virtualizes the desktop but runs the result using the desktop CPU, dramatically reducing the data center footprint of desktop virtualization.

Microsoft Licenses | DESKTOPS

Microsoft license fees are a major expense and financial commitment for companies, whether it is for Microsoft Windows, Office, server operating systems, or even Microsoft back office applications such as Customer Relationship Management (CRM) and ERP. Negotiating with Microsoft is a challenging prospect, particularly for small- and medium-sized organizations. Many companies poorly understand Microsoft licensing and do not manage Microsoft licenses as effectively as they could, which costs a significant amount of money. Microsoft has unique terminology, licensing and pricing structures, policies, software bundles, and frequent changes. For example, Microsoft removed Outlook from Exchange Server 2007 and put it in the Office 2007 suite, which changed licensing fees for some companies.
It is well worth your time to understand Microsoft's products, roadmap, and licensing, or hire a company to assist you. Before starting negotiations, know the products you are licensing, the volume of purchases, and your upgrade plans. It is also beneficial to centralize software license purchasing and negotiation to take full advantage of cost savings opportunities. As mentioned in an earlier chapter, make sure you are compliant with licenses at all times as hiding overuse results in significant fines and loss of negotiation leverage.
Top Tip: Desktop licenses

"Reduce costs of the desktop by having multiple licensing strategies. For example, the cost of desktop operating systems is high. Unless you are going to upgrade, you don't get much value. Consider not having maintenance on certain products that you do not intend to upgrade."
—Mike Degeneffe

It is important to calculate the financial impact of the various options provided by Microsoft and other upgrade scenarios. Compare a la carte pricing to product bundle pricing and the options for various client license types. For example, Microsoft's Client Access License (CAL) is assigned on a per user or a per device basis. Each CAL gives either one user or one device rights to access all instances of the Microsoft product. It is more cost-beneficial to apply user CALs when you have several devices used by one user. Device CALs are more cost-beneficial when you have devices shared by multiple users such as workstations in a 24-hour manufacturing plant or call center. Although Microsoft recommends that companies standardize on one type of CAL, this actually increases costs for organizations with both types of users. You may be able to save costs by purchasing a mix of user and device CALs.
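The user-versus-device CAL choice above is simple arithmetic applied group by group. A sketch with hypothetical prices (actual Microsoft pricing depends on your agreement):

```python
USER_CAL_PRICE = 40    # hypothetical per-user CAL price
DEVICE_CAL_PRICE = 40  # hypothetical per-device CAL price

def cheapest_cal_cost(users, devices):
    """License every user or every device in a group, whichever is cheaper."""
    return min(users * USER_CAL_PRICE, devices * DEVICE_CAL_PRICE)

# Mobile staff: 50 users, each with a laptop, tablet, and phone (150 devices).
mobile = cheapest_cal_cost(users=50, devices=150)      # user CALs win: 2000
# Call center: 90 agents sharing 30 workstations around the clock.
call_center = cheapest_cal_cost(users=90, devices=30)  # device CALs win: 1200

print(mobile + call_center)  # 3200, vs. (50 + 90) * 40 = 5600 on user CALs alone
```

In this example, mixing CAL types saves $2,400 over standardizing on user CALs, which is the effect the text describes.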
In July 2008, Microsoft announced the Select Plus program, which allows business units to receive volume discounts at the enterprise level without submitting forecasts of demand, and with no expiration date for purchases. Because purchases do not expire, terms are not renegotiated, which makes getting the terms right in the initial negotiation critical. Qualification for discount levels is based on the previous year's actual purchases, which means the timing of transactions is important: you move up a tier as soon as a transaction puts you over the volume threshold. Companies that consolidate demand may be able to achieve a better discount. Negotiate other terms in addition to price, such as planning or upgrade services and training.
Microsoft's maintenance agreement—Software Assurance (SA)—is expensive, typically 25 percent of the license price per year for server products and 29 percent for desktop products, which quickly doubles the acquisition cost of software. In other words, in four years you pay 100 percent of the server price in maintenance costs and 116 percent of the desktop price. SA provides version upgrades, the ability to spread license payments over the term of the agreement, support, training, and desktop optimization. Microsoft packages some enhancements at an additional cost. Whether to purchase SA is a major decision, as you need to consider your own upgrade strategy as well as Microsoft's future releases. For example, SA may not be cost effective if you plan to implement Microsoft's next release more than three years out, as it will cost between five and six years of SA payments to qualify for the new version. If you skip a version, you can end up paying more than twice the license fee compared with buying licenses when needed. For example, one company purchased an enterprise agreement and upgraded to Windows XP in 2003. They decided to skip Vista and planned to implement Windows 7 in 2011, so they purchased three three-year SA terms to get upgrade rights. As a result, they paid 261 percent of the original XP price for the Windows 7 licenses.
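The SA arithmetic above reduces to a break-even comparison. This sketch uses the rate from the text (29 percent of the desktop license price per year) and normalizes the license price to 100 so the output reads as a percentage.

```python
SA_RATE_DESKTOP = 0.29  # annual SA as a share of the license price (from text)

def total_sa_paid(license_price, years):
    """Cumulative SA maintenance paid over a number of years."""
    return license_price * SA_RATE_DESKTOP * years

price = 100  # normalized, so results read as percent of the license price

# Upgrading every ~3 years: 87% of the license price in SA vs. 100% to rebuy.
print(round(total_sa_paid(price, 3), 1))  # 87.0

# Skipping a version, as in the XP-to-Windows-7 example (three 3-year terms):
print(round(total_sa_paid(price, 9), 1))  # 261.0, matching the 261% in the text
```

Running your own upgrade interval through a calculation like this shows quickly whether SA or buying licenses when needed is the cheaper path.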
Review the cost of various options to SA maintenance and enterprise agreements, such as:
  • Buy or renew a component of an enterprise agreement for a subset of products and combine remaining purchases with a SA.
  • Renew an SA only for certain products or for some users. SA is not all or nothing. For example, if you upgrade products or users at different rates, apply SA coverage only to those products or users.
  • Purchase licenses when needed under an SA.
  • Consider not renewing the enterprise agreement if upgrades are more than four years apart.
  • Consider deferring expenses and buying licenses when needed. For example, if you need to cut costs now, it may not make sense to pay for three more years of SA coverage to obtain Windows 7 upgrade rights.
  • Consider the business version of the Windows operating system. Vista comes in business and enterprise versions, with the business version included as standard on purchased equipment. If you do not need the enterprise version of Windows, you may not want to include those licenses in volume purchasing or SA coverage. When you upgrade the operating system, you may need new hardware anyway.
  • Implement improved software asset management processes rather than purchasing an enterprise agreement.
  • Do not consider SA the same as purchasing premier support, nor is SA a requirement for premier support.
  • Centralize license purchasing to achieve larger discounts and increase negotiating leverage.
  • Consider reassigning licenses to eliminate the need for purchasing new licenses.
  • Consider purchasing a mix of user and device CALs.
  • Assign CAL licenses to minimize spending.
  • An enterprise agreement evenly spreads out payments with predictable annual costs rather than upfront costs but may not make sense in times of budget cuts.
  • Re-evaluate the costs and benefits of unpredictable upgrade cycles and skipping versions particularly if you have SA.
  • Do not pay SA for unused products. If you do not use all the software that is covered in the agreement, you may be wasting money. For example, if you purchase Core CAL or Enterprise CAL bundles and only use some of the software, you are paying more when factoring in the cost of SA than if you would have purchased each CAL under an agreement without SA.
    Top Tip: Audit upgrades

    "Although standardizing is a good move, you do increase risk and reliance with one vendor. Take each renewal or upgrade and audit against the business roadmap to make sure it is worth it."
    —Roger Champagne

  • Evaluate if virtualization saves money (discussed later).
  • Negotiate discounts for SA.
  • Wait for operating system upgrades with a hardware refresh.
  • Consider open source alternatives (discussed below).
There will continue to be changes in the pricing of Microsoft licenses and maintenance agreements. The point is that you need to understand current Microsoft license terms and costs to determine the most cost-effective option for structuring maintenance and upgrades, as the difference in total cost is substantial. You may have more leverage than you think in Microsoft negotiations if you consolidate your negotiations for no-choice software (like Windows and Office) with negotiations for software in categories where Microsoft has to compete aggressively (e.g., ERP or SQL Server).

Storage Consolidation, Virtualization, SAN, and NAS

There are continual advances in storage technology that drive down storage-related costs, including storage consolidation using storage area networks (SAN), network-attached storage (NAS), and storage virtualization. Improved disk drive performance and storage utilization provide ways to get more for less. Implementing new storage technology allows you to sunset lower-end storage and saves costs in footprint, power, maintenance, and support.
Top Tip: Disk technologies

"We experienced tremendous cost avoidance by implementing new technology for disk archiving, de-duping, and compression. It took two and a half months to realize the value, but we experienced a 90% reduction in disk space. This also saved rack space, power, maintenance, and depreciation."
—Greg Hayhurst
Tennant Company

Storage virtualization makes various separate hard drives act like one large storage pool. With virtualized storage, you spend less time managing storage devices, storage is more efficient, and you are better able to manage utilization. You can add or replace drives without affecting other storage devices as the virtualization software manages traffic. Backup and mirroring are faster as you only copy data that has changed. Rather than having small, unmanaged pools of storage located throughout the company, you are able to realize significant cost savings by consolidating storage in a SAN and virtualizing storage. The control layer of virtualization allows the data to be physically located in a remote site or at multiple sites, which provides options for backup and disaster recovery.
Some companies have also reduced costs with storage virtualization through thin provisioning, which presents applications and servers with more logical capacity than is physically allocated, adding physical disk only as data is actually written.
Array-based virtualization also provides expansion to an entire array of disks. Network-based storage virtualization manages storage over the network.
Advancements in virtualization and SAN technology will continue to provide more storage at reduced costs. Storage is definitely one area where retaining older technology is actually more expensive than implementing newer state-of-the-art options.

Disk De-Duplication

De-duplication is another storage technology that saves companies money. It identifies duplicate data (at the file or block level) and stores a single copy, replacing the duplicates with pointers to that copy. Storage virtualization makes this possible.
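A minimal sketch of the idea at the block level follows. Real de-duplication products use content-defined chunking and additional safeguards against hash collisions; the names and block size here are illustrative.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def dedupe(data, store=None):
    """Split data into blocks, hash each block, and keep only one physical
    copy per unique hash. Returns (list of block hashes, shared store)."""
    store = {} if store is None else store
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store the first copy only
        refs.append(digest)
    return refs, store

# Two "files" that share most of their content:
a = b"A" * 8192 + b"unique-tail"
b_ = b"A" * 8192 + b"other-tail"
store = {}
refs_a, _ = dedupe(a, store)
refs_b, _ = dedupe(b_, store)

print(len(refs_a) + len(refs_b), len(store))  # 6 logical blocks, 3 stored
```

Here six logical blocks reduce to three stored blocks, the 50 percent style of reduction that makes de-duplication attractive for backup data with heavy overlap.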

Tiered Storage

Often, companies assign storage based on what is available rather than analyzing the right storage for the business need. Tiered storage provides lower-cost storage for applications that access storage less often or for areas that require lower performance or reliability, which can reduce storage costs by 30 to 50 percent. Match business needs to storage performance and reliability in order to provide overall storage at a lower cost.
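A sketch of the tiering arithmetic, with hypothetical per-gigabyte prices and data placement:

```python
# Hypothetical $/GB prices for three storage tiers:
TIER_PRICE = {"tier1": 5.00, "tier2": 2.00, "tier3": 0.50}

def storage_cost(gb_by_tier):
    """Total cost of placing the given gigabytes on each tier."""
    return sum(TIER_PRICE[tier] * gb for tier, gb in gb_by_tier.items())

total_gb = 10_000
everything_on_tier1 = storage_cost({"tier1": total_gb})  # 50000.0

# Match data to business need: hot data on tier 1, the rest lower down.
tiered = storage_cost({"tier1": 4_000, "tier2": 4_000, "tier3": 2_000})  # 29000.0

print(round(1 - tiered / everything_on_tier1, 2))  # 0.42 -> 42% saved here
```

The 42 percent saving in this hypothetical placement falls inside the 30 to 50 percent range cited above; the exact figure depends entirely on how much of your data truly needs top-tier performance.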

Manage Use of Storage

As with network bandwidth, it can seem that you can never have enough storage. In a cost-conscious environment, it is not prudent to continually purchase additional storage without properly managing the storage you have. Ensure that you properly manage your storage usage and report storage costs. Companies often use only a fraction of their allocated storage, leaving hidden pockets of unclaimed capacity. The following are ways that companies have saved money in the management and use of storage:
  • Report and analyze how much storage you have, how much is allocated, and how much is used. Right size file allocations to only what you need for the next six months or less.
  • Limit personal storage volumes. Have and enforce a policy on storage use.
  • Follow policies for storage left from terminated employees as reclaiming even a few gigabytes per employee amounts to savings.
  • Enforce archiving policies that are consistent with records management policies.
  • Reduce tape volumes. Review backup and retention policies. Implement compression and de-duplication technology. Eliminate third-party, off-site tape storage if possible. Eliminate legacy tape cartridges.
  • Review and clean up storage on a regular basis.
  • Examine the possibility of using the cloud storage services that many WAN and Internet providers offer. This can save on acquisition and support costs.
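The first bullet above, reporting allocated versus used storage, can be sketched as a simple utilization report. The volume figures and the 50 percent threshold below are illustrative assumptions, not figures from any particular environment:

```python
def storage_report(volumes, threshold=0.5):
    """volumes: list of (name, allocated_gb, used_gb) tuples.
    Flag volumes whose utilization falls below the threshold --
    the hidden pockets of unclaimed storage."""
    underused = []
    for name, allocated, used in volumes:
        utilization = used / allocated if allocated else 0.0
        if utilization < threshold:
            underused.append((name, allocated - used, utilization))
    # Largest reclaimable allocations first, so effort goes where
    # the savings are biggest.
    underused.sort(key=lambda v: v[1], reverse=True)
    return underused
```

Feeding this kind of report into a regular review cycle is what turns "right-size file allocations" from a one-time cleanup into an ongoing discipline.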

Cloud Computing | SERVERS

Cloud computing means that services are delivered through the Internet rather than through an in-house data center. Cloud computing offers built-in scalability, efficiency, economies of scale, and cost savings. Applications run on fewer machines, and you are able to consolidate servers. Operating system virtualization and other software help companies create private clouds that improve utilization of computing resources. With cloud computing, you pay as you go, without large initial investments. Cloud computing can take the form of hardware clouds (e.g., Elastic Compute Cloud offered by Amazon Web Services), software clouds (e.g., SaaS), or desktop clouds (e.g., Google Docs, Yahoo's Zimbra, and Microsoft Live). As mentioned in Chapter 4, additional options and variations in cloud computing are emerging, such as platform-as-a-service (PaaS) and database-as-a-service (DaaS). Companies are also creating their own in-house cloud services.
Currently there is a lot of hype about cloud computing. However, cloud computing seems to be in the early stages of implementation for most companies. Several firms interviewed had plans for major moves, such as moving e-mail and collaboration to the cloud, and hoped to realize savings as high as 50 percent. Some companies started by moving test and development environments to the cloud.
Top Tip: Cloud computing

"Using cloud computing for hosting services has created significant reduction in costs."
—Larry Bonfante

Top Tip: Cloud computing is an inexpensive option

"We are looking at cloud computing. It can be a good inexpensive option for certain things. We will save $120K by outsourcing student e-mail to the cloud as e-mail is a utility. We are looking at moving test and development environments to the cloud. We hope to develop a collaborative private cloud."
—Anne Agee
University of Massachusetts

The following are cost implications to review when considering cloud computing:
  • Older applications may not be able to operate on a cloud and modifying them could be cost-prohibitive.
  • Mission-critical applications may not be the best place to start with cloud computing as you might want to prove reliability and performance.
  • Highly regulated industries may have compliance issues, concerns, or additional measures related to cloud computing and may not be the best candidates. No matter what industry you are in, address data protection and content management challenges.
  • Reliability and availability are issues affecting costs. Understand your business needs and make sure the vendor meets your requirements. Some vendors allow you to pay less for noncritical applications.
  • Define requirements and commitments including availability, performance, reporting, incident resolution, backup, disaster recovery, capacity, and bandwidth in SLAs. Include contractual obligations with penalties for failure to deliver.
  • Have various levels of security, such as company-based security, role-based security, and Virtual Private Network (VPN) transport-level security.
  • Look at high-cost, underutilized parts of your environment to consider for cloud computing. For example, test environments may be a good place to start.
  • Understand your roles and responsibilities relative to cloud computing because it does not necessarily mean that everything is done for you.
  • Although the upfront investment for hardware and software is low, the costs may be spread out over months and years. Be sure you calculate the long-term cost of ownership when comparing costs. In addition, make sure you are comparing similar levels of high performance and best practices, such as content delivery, load balancing, and caching.
  • Pay attention to usage terms and fees because they can be a factor in mounting costs.

Hosting Services | SERVERS

Analyze the impact of running applications on hosted external servers or, conversely, bringing hosted services in-house. With the competitive pressures of on-demand services such as software-as-a-service (SaaS) and cloud computing, as well as decreasing infrastructure costs, the cost of hosting services is expected to decline considerably over the next few years. Consider the following cost reduction possibilities relative to hosting services:
  • As outlined in cloud computing, have requirements documented in SLAs with penalty clauses for noncompliance. Renegotiate if there are constant service shortcomings.
  • If allowable in the terms of your contract, renegotiate your contract for lower costs. To realize a lower price, you may need to sign a longer contract, include broader services, accept a lower SLA, or use offshore resources.
  • A strong and close partnership and relationship with your provider is also helpful when renegotiating a win-win solution given your cost reduction goals.
  • Include a clause for annual rate review in order to obtain reductions in market rates.
  • Carefully review baseline volume commitments given a decrease in business volume.
  • Include a business downturn clause in the contract to account for layoffs or the sale of a division.
    Top Tip: Hosting

    "We were able to save a significant amount of money by changing hosting partners. We did this by taking advantage of virtualization, using cloud computing for non-production environments, and restructuring the landscape, using the QA environment as our disaster recovery environment. We also kept multiple players in the mix to get favorable pricing."
    —Lina Shurslep
    Navarre Corporation

  • There continue to be changes and increased flexibility in how hosting, application, and managed service providers charge for services. For example, some providers are moving to a utility-based pricing model with charges per user per month over a three-year period rather than fixed costs over the same period. Review options on a regular basis and determine the most cost-effective option given your business needs.

Linux and Open Source | SERVERS

Open source has come a long way toward being strong enough for enterprise-wide mission-critical use. Many good open source alternatives save upfront costs, development effort, and on-going support costs. You must evaluate if open source software is right for your organization. It depends on several factors, such as:
Top Tip: Open source

"Open source is not as easy as it looks at first glance. You trade license and maintenance costs for in-house staff to support the application."
—Samuel J. Levy
University of St. Thomas

  • Compliance and security requirements
  • How broadly the application is used
  • How widely you support the application
  • The activity and size of the user community
  • How much risk the organization is willing to absorb
  • How much cost savings would result
Of course, the Linux operating system has grown in popularity as many companies have significantly reduced costs while addressing security and reliability issues and avoiding single-vendor environments. There are several different flavors of Linux. Although cost comparisons vary from environment to environment, the following are ways that some companies have reduced costs and total cost of ownership using Linux:
  • Free, open-source systems, with even the advanced versions significantly less expensive than Windows server operating systems
  • Lower administration, maintenance, and support costs and a quick learning curve
  • Increased flexibility and adaptability with open source
  • Lower hardware costs and improved performance
  • Less system downtime, improved reliability, and higher end-user productivity
  • Fewer security and virus attacks, fewer security holes, and free online security updates
Many companies start in the open-source area for noncritical applications and expand their use as they gain more comfort and experience.
Consider open source for a variety of areas. For example, lightweight components such as Spring, Jetty, and Tomcat are alternatives to JEE application servers like WebLogic, and MySQL is an alternative to expensive database licenses. Many integration solutions (for example, Jitterbit) can lower costs and development time.

Server Virtualization

Virtualization provides companies with cost avoidance savings, as it improves the utilization and scalability of the entire infrastructure and avoids the purchase of additional hardware. It helps IT be more responsive and agile in handling changing business needs. Virtualization is not limited to servers; it also applies to desktop, network, application, and storage environments. Implementing virtualization is an innovative way to use newer technology to reduce costs. Although it is newer technology, server virtualization has similarities to the old days of partitioning mainframes.
Server virtualization saves money on rack space, power and cooling, maintenance and support, and disaster recovery. Virtualization typically consolidates an average of six physical servers onto one host, and ratios of 15:1 or more are possible. Server virtualization is now mainstream technology, and most IT shops use it to streamline capacity management, use resources more efficiently, save money, and improve scalability. In fact, just about every company interviewed had implemented some degree of server virtualization and realized significant cost reductions. Many companies estimated that server virtualization saved them at least 35 percent of their server costs. If you have not investigated and started using server virtualization, you need to do so. Although it requires an initial upfront investment, virtualization savings are not necessarily short term (unless you are at server capacity); they are more a long-term cost-of-ownership savings and a cost avoidance when buying additional servers in the future.
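The consolidation arithmetic can be made concrete with a back-of-the-envelope model. The cost figures and the `host_premium` factor (modeling the larger virtualization host's higher running cost) are hypothetical assumptions, not figures from the companies interviewed:

```python
def consolidation_savings(physical_servers, ratio,
                          cost_per_server_per_year, host_premium=1.5):
    """Estimate annual server cost before and after consolidating
    `ratio` physical servers onto each virtualization host."""
    hosts = -(-physical_servers // ratio)  # ceiling division
    before = physical_servers * cost_per_server_per_year
    # Each host costs more to run than a single small server,
    # modeled here by the host_premium multiplier.
    after = hosts * cost_per_server_per_year * host_premium
    return before, after, 1 - after / before
```

A model like this deliberately leaves out hypervisor licensing, shared storage, and staff training; include those in your actual business case, as the chapter's later considerations list suggests.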
Top Tip: Virtualization

"Virtualization does not immediately reduce costs but provides a future reduction. We anticipate $1M savings starting in 2010 due to server virtualization as we won't have to refresh aging servers."
—Energy Company

Be sure to investigate the impact of virtualization on all your software licenses, as it could increase or decrease your license costs depending on how your agreements are structured. You may be able to reduce the license costs for applications that you rarely access. Microsoft has made some modifications in software licensing to accommodate the move to virtualization. In September 2008, Microsoft dropped the 90-day reassignment rule for most products, except server operating systems. Previously, you had to assign server and other software to one server for at least 90 days before you could assign it to another server. For a virtualized server farm with load shifting between multiple servers, you would have needed to assign licenses to every server involved, making it cost prohibitive. The new terms allow some Microsoft products to be assigned within a server farm spanning up to two data centers, as long as they are in time zones no more than four hours apart. For example, if you were running Microsoft Exchange Server on a virtual machine (VM) and wanted to be able to migrate the VM to two other servers in the farm, you would previously have needed three Exchange Server licenses and three Windows server licenses, which would be very expensive. With the rule change, you need only one Exchange Server license, a 66 percent savings, and enough Windows server licenses to cover the operating systems running on the physical server and the VM. You are therefore able to take advantage of VM failover capabilities and increase the utilization rate and capacity of the machines. From a licensing perspective, you can see from these examples that moving to a VM configuration can be confusing.
Virtualization also affects the type, or edition, of Microsoft license that is the most cost-effective for your needs. For example, Windows Server 2008 is available in five editions with significant cost variation: Standard, Enterprise, Datacenter, and two niche editions. The Standard edition comes with one physical instance and one virtual instance. The Enterprise edition has one physical instance and up to four virtual instances. The Datacenter edition, licensed per processor, includes an unlimited number of operating system instances. If you plan to host more than three VMs on a server, the Enterprise or Datacenter editions are the most cost-effective, and the Datacenter edition is most cost-effective when the maximum number of VMs is greater than four times the number of physical processors in the server. Server licenses for all editions of Windows Server are assigned to physical servers, not to VMs. Therefore, an Enterprise edition on a physical server running two Windows Server VMs can run up to two more VMs without requiring additional operating system licenses. The bottom line is that you need to determine the most cost-effective edition based on the number of VMs you have. Of course, check with Microsoft, or any other vendor, for current licensing configurations and rules, but the examples outlined above show the complexity of Microsoft licensing under virtualization. It is definitely not for the faint of heart, and it takes time to understand and design the most cost-effective solution for your environment.
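The edition trade-off can be expressed as a simple comparison. The prices below are placeholders (check Microsoft's current price list), and the entitlement rules follow the Windows Server 2008 terms described above: Standard covers one virtual instance per license, Enterprise up to four, and Datacenter (licensed per processor) an unlimited number:

```python
import math

def cheapest_edition(vms, processors,
                     std_price=1000, ent_price=4000,
                     dc_price_per_proc=3000):
    """Return (edition, cost) for running `vms` Windows Server VMs
    on one host with `processors` physical processors.
    Prices are illustrative placeholders."""
    costs = {
        # Standard: one virtual instance per license.
        "Standard": vms * std_price,
        # Enterprise: up to four virtual instances per license.
        "Enterprise": math.ceil(vms / 4) * ent_price,
        # Datacenter: per-processor, unlimited virtual instances.
        "Datacenter": processors * dc_price_per_proc,
    }
    return min(costs.items(), key=lambda kv: kv[1])
```

Running this for a two-processor host shows the pattern the text describes: a handful of VMs favors Standard or Enterprise, while anything beyond four VMs per processor tips the comparison toward Datacenter.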
In addition to reviewing software licenses, the following are additional considerations that you should review when considering virtualization as they may have an impact on costs:
  • Impact to the network. Once an organization implements server or storage virtualization, the impact may be significant enough that you need to redesign the network to handle the new network traffic pattern.
  • Impact to storage. Virtualization impacts the storage architecture, particularly for direct-attached storage systems.
  • Server workloads. You need to have a good understanding of the server workloads and business priority to determine what you should consolidate into the virtualization platform and what you should not.
  • Tools for management of the infrastructure. You may require additional tools to manage the virtualized environment.
  • Training of support personnel. The virtualized environment adds a layer of complexity. Make sure you train the technical staff to cover design, implementation, and support. Identify these costs in the initial financial analysis of implementing virtualization.
  • Book life of assets. Equipment that is nearing the end of the life cycle and end of book life is the best candidate for virtualization.
  • Compliance and security requirements. Virtualization may not be the best option for heavily regulated industries or applications.
  • Consider virtualization testing. Microsoft's free Virtual Server and Virtual PC, for example, allow you to test virtualization.
  • Consider open source options. For example, FreeVPS provides some free open source virtualization options.