DSC Tech Library

Linux Information

This section of our technical library presents information and documentation relating to the Linux operating system, especially as it relates to the telecommunications and Linux CRM software marketplace. Since the company's inception in 1978, DSC has specialized in the development of software productivity tools, call center applications, computer telephony integration software, and PC-based phone systems. These products have been developed to run on a wide variety of computer systems and have been tested and are operational on Linux servers and systems. The following are articles and information regarding Linux and its applications in the telecom and business environments.

The Data Center of the Future

February 5, 2004
By Drew Robb (editor@datamation.com)

Data centers already account for over half of corporate IT budgets, and META Group projects a 70% increase in data center budgets over the next decade. But where that money goes has changed drastically in the last few years. While servers and storage together accounted for 20% of 2002 expenditures, that spending will drop in both relative and absolute terms by 2012. Software spending, meanwhile, is expected to more than double over the same period.

But all is not well with data centers. Utilization remains low, and costs cannot continue to rise without jeopardizing the rest of the IT budget. The news is not all bad, however: several emerging trends should allow data centers to boost performance while keeping costs under control.

The Rise of Linux

While Unix is far from dead, Linux is quickly becoming the operating system of choice in data centers.

"In 2004, Linux adoption will explode in every data center," says Ted Schadler, an analyst for Forrester Research.

This is partly due to the low cost of the operating system itself, but that's certainly not the only factor, especially since the operating system accounts for only a small share of overall data center costs. In fact, most companies are willing to pay for Linux in order to get a supported enterprise version. According to Gartner, the Linux market will exceed $9 billion in 2007.

But beyond the lower cost, Linux offers the data center a wide range of deployment choices. It runs on everything from the IBM z900 mainframe down to cell phones. It runs on low-end web servers as well as on eight-way, mid-sized boxes. It runs on laptops and workstations, and it is the operating system of choice for clusters, which now comprise over 40% of the world's top 500 supercomputers. No other operating system covers this range in the enterprise, which means that by standardizing on Linux, an organization can reduce the number of different skill sets needed in the data center.

Smaller Servers

Once upon a time, big iron dominated the data center. While mainframes like IBM's zSeries and the HP Superdome are experiencing a bit of a renaissance, the trend is also toward using the smallest servers possible. Currently, that means blade servers. Though they have been on the market only a short while, sales of these micro-servers surpassed $100 million in 2003 and will reach $3.7 billion in 2006, according to IDC.

Blade servers offer companies low-cost scalability, since it is easy to assign a batch of blades to a particular application rather than buying a more expensive server that then sits underutilized. Because blade servers pack a dozen or more server blades into a single chassis, they drastically cut infrastructure costs for racks, cabling, and cooling. Then there is the ease of support: when a blade goes down, it is a simple matter to swap it out and let the system rebuild itself automatically.

One other factor contributing to the growth of blade server usage is Linux. Since it is a lightweight operating system that doesn't consume a lot of disk space or processor overhead, it is ideal for use on these small servers.

Virtualization

Companies have been moving toward storage virtualization for years. Now they are looking to do the same with the rest of their IT resources. Virtualization brings all the computing resources into a common interface, where they can be viewed as a single system.

This solves two major problems for the data center. First, it cuts down on the time needed to configure and assign resources, since the virtualization software dynamically assigns the traffic load to the best available server; otherwise, the administrator has to set up the services handled by each machine. Second, it cuts costs by reducing over-provisioning.
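
As a rough illustration of the first point, below is a minimal sketch in Python of the kind of placement decision a virtualization layer automates. The pool size, load figures, and least-loaded policy are assumptions made for this example, not details from the article.

    import heapq

    # Toy model of what the virtualization layer automates: instead of an
    # administrator statically mapping each service to a machine, every
    # incoming unit of work goes to the currently least-loaded server.

    def make_pool(n_servers):
        pool = [(0.0, sid) for sid in range(n_servers)]  # (load, server id)
        heapq.heapify(pool)
        return pool

    def assign(pool, work):
        load, sid = heapq.heappop(pool)       # least-loaded server
        heapq.heappush(pool, (load + work, sid))
        return sid

    pool = make_pool(4)
    for work in [0.2, 0.5, 0.1, 0.3, 0.4]:
        print(f"work unit of {work} -> server {assign(pool, work)}")

A real virtualization layer weighs far more than current load, of course, but the point stands: the mapping is computed continuously rather than set up by hand per machine.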

A typical scenario today is for each application to be assigned to its own server, with a second server acting as a backup or development machine. Without virtualization, both servers must be oversized so they can comfortably handle the greatest anticipated traffic load. With virtualization, this per-server over-provisioning becomes unnecessary, since all the available servers are pooled and treated as a single system.
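
To see why pooling helps, consider some hypothetical numbers (the figures below are invented for illustration; the article gives none): ten applications that each average a fraction of a server but can individually spike to a full one, with only a few spiking at once.

    # Invented figures for illustration only.
    apps = 10
    avg_load = 0.3          # each app's average load, in whole-server units
    peak_load = 1.0         # each app's worst case
    concurrent_peaks = 3    # assumption: at most three apps peak at once

    # Dedicated model: every app gets a primary plus a backup, both sized for peak.
    dedicated_capacity = apps * peak_load * 2

    # Pooled model: size the pool for the assumed worst case across all apps.
    pooled_capacity = concurrent_peaks * peak_load + (apps - concurrent_peaks) * avg_load

    print(f"dedicated: {dedicated_capacity:.1f} server units")  # 20.0
    print(f"pooled:    {pooled_capacity:.1f} server units")     # 5.1

Under these assumed numbers, the pooled design needs roughly a quarter of the capacity, which is exactly the over-provisioning saving described above.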

Evolving Standards

Finally, the data center of the future will be built on common standards in order to ensure greater interoperability and ease the management burden. Currently, there are two competing specifications.

One of these is the Data Center Markup Language (DCML). DCML is an XML-based specification that provides a structured model and encoding to describe, construct, replicate, and recover data center environments and elements. It is a new effort, started by EDS and Opsware in mid-October 2003. Six weeks later, the DCML Organization had a website up and running (www.dcml.org), about fifty members, and plans to issue its 1.0 specification for public comment by the end of the year. Eventually, the organization will submit the spec to a standards body such as the Distributed Management Task Force (DMTF) for approval.
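
To make "an XML-based specification that describes data center elements" concrete, here is a minimal Python sketch that builds a DCML-flavored document. The element and attribute names are invented for illustration; the actual 1.0 specification had not yet been published for comment when this was written.

    import xml.etree.ElementTree as ET

    # Hypothetical, DCML-flavored description of one server. The tag and
    # attribute names below are invented; they are not from the DCML spec.
    env = ET.Element("environment", name="web-tier")
    server = ET.SubElement(env, "server", id="web-01")
    ET.SubElement(server, "os", name="Linux", version="2.4")
    ET.SubElement(server, "application", name="apache", version="2.0.48")

    print(ET.tostring(env, encoding="unicode"))

The idea is that a machine-readable description like this, shared across vendors, is what would let tools describe, replicate, and recover a data center environment automatically.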

Microsoft, meanwhile, is offering its own XML specification, called the System Definition Model (SDM). Last May, the company demonstrated SDM in conjunction with HP. SDM helps automatically configure Windows servers and applications.

The DCML Organization says its standard will accept SDM information in order to manage Windows servers as part of a heterogeneous environment. Microsoft, however, is not a member of the DCML Organization. Also missing are major hardware manufacturers such as Dell, IBM, Hitachi, and HP. Computer Associates, BEA, BMC, and other major management software vendors, however, are members.

These are some of the factors affecting the future development of data centers. But what will all this add up to from the viewpoint of a data center manager? For starters, the job will become more about provisioning services than about knowing the ins and outs of every specific component. Just as consumer hardware and applications are plug-and-play, look for enterprise applications and hardware to become self-configuring as well.

Barring a disaster like SCO winning its lawsuits, many if not all of your machines will be running Linux. Autonomic systems will correct most errors without human intervention. Open standards will make interoperability problems a thing of the past, and the hardware costs of stocking the new-age data center will be marginal. And while the data center won't run itself, it will be easier to manage than ever before.