
Thursday, May 22, 2008

Is Gigabit Ethernet (GigE) A Good Choice For Business Bandwidth?

From the outside looking in... it appears the business world, our "Broadband Nation," is going a bit crazy over Gigabit Ethernet. Careful... there's more to the story than meets the eye (or the bandwidth meter... wink).

You can sum it up like this: more often than not, users both inexperienced and experienced are under the impression that bandwidth alone determines end-to-end speed for any data. In fact, that's only half the equation. You must also look at the actual volume of data being sent to determine whether there are any bottlenecks or other performance concerns.
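For a rough sense of what "looking at the data volumes" means, here's a back-of-the-envelope sketch in Python (the 10 GB nightly backup is just an assumed example volume, not a figure from any real network):

    # Back-of-the-envelope: how long does a given volume of data take
    # on a given link, ignoring protocol overhead and contention?

    def transfer_seconds(volume_gb: float, link_mbps: float) -> float:
        """Ideal transfer time for volume_gb gigabytes at link_mbps."""
        return (volume_gb * 8e9) / (link_mbps * 1e6)

    NIGHTLY_BACKUP_GB = 10   # assumed example volume
    for link in (100, 1000):
        minutes = transfer_seconds(NIGHTLY_BACKUP_GB, link) / 60
        print(f"{NIGHTLY_BACKUP_GB} GB over {link} Mbps: ~{minutes:.1f} min")

At that size even Fast Ethernet clears the job in under a quarter of an hour; it's the volume, not the link rate, that tells you whether you actually have a problem.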

Now take a step back and look closely at your actual usage. For example, you've probably been asked many times to verify whether the links to various servers are saturated, e.g. a mail server hosting hundreds of mailboxes.

One would expect such a link to be very busy all the time. In practice, most baselines show it hardly ever moving past a light 25% utilization, in this case 25 Mbps on a 100 Mbps link, and for the record, that figure is most often both directions combined on a full-duplex link.
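For the curious, here's a minimal sketch of how such a baseline figure is derived: poll the interface octet counters twice and turn the delta into utilization. The counter deltas below are invented so the numbers land on the 25% example; real values would come from SNMP (ifInOctets/ifOutOctets) or your monitoring tool:

    # Minimal baseline arithmetic: two counter polls, five minutes
    # apart, converted to average utilization over the interval.

    INTERVAL_S = 300                    # 5-minute polling interval
    LINK_MBPS = 100                     # Fast Ethernet

    in_delta = 400_000_000              # bytes received in the interval (assumed)
    out_delta = 537_500_000             # bytes sent in the interval (assumed)

    bits = (in_delta + out_delta) * 8   # both directions combined
    avg_mbps = bits / INTERVAL_S / 1e6
    print(f"average: {avg_mbps:.1f} Mbps "
          f"({avg_mbps / LINK_MBPS:.0%} of a {LINK_MBPS} Mbps link)")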

The moral of the story is not whether you need, or even want, 1000 Mbps or 10,000 Mbps in the abstract. It is whether the need or want exceeds 100 Mbps, call it 101 Mbps in whole numbers for argument's sake, that warrants the next level of throughput, and likewise a need past 1000 Mbps (1001 Mbps) before the big 10,000 Mbps is worth serious consideration.
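That rule of thumb is simple enough to put in a few lines of Python (the thresholds are just the nominal Ethernet rates, and the demand figures echo the examples above):

    # The rule of thumb as code: step up a tier only when demonstrated
    # need crosses the current tier's capacity.

    TIERS_MBPS = (100, 1000, 10000)

    def recommended_tier(peak_demand_mbps: float) -> int:
        """Smallest standard Ethernet rate that covers the demand."""
        for tier in TIERS_MBPS:
            if peak_demand_mbps <= tier:
                return tier
        return TIERS_MBPS[-1]   # beyond 10GE you're into link bundles

    for demand in (25, 101, 1001):
        print(f"need {demand} Mbps -> provision {recommended_tier(demand)} Mbps")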

The price per port for Gigabit Ethernet is now low enough that there's no reason to forcibly stay away. Careful planning, however, will let you keep reusing the ever-growing copper infrastructure and avoid the now-outdated fibre-for-gig requirement that, not so many years ago, put gigabit out of reach for most. Today you can get a gigabit switch from Linksys or D-Link for pocket change; the bang for the buck is at a great price point. High-end components cost more elsewhere in the lineup, but they buy other value that isn't the point right now, and for good reason.

I'd suggest baby steps first: look at your 100 Mbps infrastructure and consider adding bandwidth by teaming, or aggregating, connections, e.g. Fast EtherChannel or Switch-Assisted Load Balancing (terms used by Cisco and HP respectively). This creates simple increments, adding network cards/ports one at a time, often reaching near-gigabit speeds without replacing the switching devices. It can be more cost-effective, and it's a tried-and-true technology that is seldom used; it also offers more than just speed, such as redundancy. Of course these aggregation techniques have evolved into the gigabit market too, but there they generally hold value at the super-high end of throughput requirements, and in either case they are typically not used at the access layer (aka the desktop layer).
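To make the teaming idea concrete, here's a simplified model in Python of how aggregation typically spreads traffic: each flow is hashed onto one member link. Real NICs and switches hash on MAC/IP/port fields; the MD5 digest here is just an illustrative stand-in:

    # A simplified model of link aggregation: each flow hashes to one
    # member link, so aggregate capacity grows with the member count
    # but a single flow is still capped at one member's speed.

    import hashlib

    MEMBERS = 4   # e.g. four teamed 100 Mbps ports

    def member_for_flow(src_ip: str, dst_ip: str) -> int:
        """Pick a member link for a flow, the way a teaming hash might."""
        digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
        return digest[0] % MEMBERS

    for i in range(1, 9):
        src, dst = "10.0.0.5", f"10.0.1.{i}"
        print(f"{src} -> {dst}: member link {member_for_flow(src, dst)}")

Note the design consequence: a four-port 100 Mbps team carries 400 Mbps in aggregate, but any single flow still tops out at 100 Mbps, which is one reason a true gigabit port can still win for a single busy server.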

Some years ago, Cisco introduced the 40xx series switches. The first model had several slots, each with 6 Gbps of throughput. Two cards could plug into such a slot. One had six gigabit ports, intended primarily for inter-switch trunking. The other had eighteen gigabit ports which, at first glance, would seem a bad idea: that's 18 Gbps of ports feeding a 6 Gbps slot, a 3:1 oversubscription.

The rationale for the 18-port card, however, was that a typical Windows server of the time, with typical processors, could only drive 300 Mbps or so of throughput. There was still a net benefit to plugging such a server into a gigabit port, because the individual bits clock in and out in one nanosecond rather than the 10 nanoseconds of a Fast Ethernet interface. Without going into a lot of queueing theory, that is, statistically, a good thing.
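The arithmetic behind that nanosecond remark, as a quick sketch: the same full-size frame spends a tenth of the time on the wire at gigabit speed, so everything queued behind it waits that much less:

    # The same 1500-byte frame spends a tenth of the time on the wire
    # at 1 Gbps: 1 ns per bit instead of 10 ns per bit.

    FRAME_BYTES = 1500   # full-size Ethernet payload

    for name, mbps in (("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)):
        micros = FRAME_BYTES * 8 / mbps   # Mbps == bits per microsecond
        print(f"{name:18s}: {micros:6.1f} microseconds per frame")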

Using parallel 100 Mbps cards still holds the bit transfer rate down, and it adds the nontrivial cost of additional cables and physical ports.

Traffic statistics can be misleading if you look only at the average transfer rate and conclude the link is little utilized. Especially if you are dealing with delay-sensitive applications such as VoIP, you need to consider peak, not average, utilization, because it is at the peaks that you are most likely to encounter delay. For routine transaction processing, average load may be good enough, but not for audio and video, where the effect of delay is cumulative.
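A tiny illustration of how an average hides exactly those peaks (the per-second samples are made-up figures for a 100 Mbps link):

    # The mean looks harmless, but VoIP packets arriving during the
    # bursts would still sit in a queue.

    samples_mbps = [5, 8, 6, 95, 98, 7, 5, 9, 96, 6, 8, 7]   # assumed data

    average = sum(samples_mbps) / len(samples_mbps)
    peak = max(samples_mbps)

    print(f"average: {average:.1f} Mbps  ->  'the link is barely used'")
    print(f"peak:    {peak} Mbps         ->  queueing delay during bursts")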

Now, I'm a big fan of Gigabit Ethernet. But the driver is QoS in a very broad sense: not just traditional traffic analysis, but how video and multimedia can be supported for globally connected hosts reaching back to server clusters.

Examine the existing traffic distribution in most enterprises and you'll find that most don't need big GigE, but a few already do, and some really dynamic organisations will be shouting "yes please."

Change the perspective, though, and there are genuinely organisations out there who will want to offer services that previously couldn't be offered, thanks to the LAN bottleneck or resource exhaustion at the server cluster.

Just remember that it doesn't provide a universal best fit for all enterprises and users.

Pricing for Gigabit Ethernet infrastructure is a major factor in implementing it in any organization. While more and more server vendors are equipping their servers with 10/100/1000 NICs (powered by Intel and Broadcom chipsets), the network side of the house, if not purchased or installed in the last 2-3 years, is still most likely 10/100 Mbps.

It makes sense to require all new servers to be 1000 Mbps capable, but on the switch side of things: if there isn't a widespread need for 1000 Mbps, don't retrofit your entire architecture with it. Put a blade or two in your core switches, and a smaller distribution switch or two at the few aggregation points that do need it.

As the price per port for GigE comes down, cash in on vendor trade-ins to upgrade your equipment. And watch your bandwidth: the historical trend is that whenever additional capacity appears, people get lax about managing resources.

640K used to be a memory barrier that forced people to write efficient code; now there are gigs upon gigs, people get sloppy with their code, database indexes, etc., and end up requiring even more memory. The same goes for network traffic. I'm not advocating being stingy with your bits, just sensible.

Now, 10GE is a different story, and I can currently only see a need for it in linking campuses, and buildings with a significant number of high-bandwidth users. A single 10GE XENPAK costs as much as 2-3 distribution switches, or even a bandwidth-shaping device.

The final practical message... choose carefully.

Michael is the owner of FreedomFire Communications, including DS3-Bandwidth.com and Business-VoIP-Solution.com. Michael also authors Broadband Nation, where you're always welcome to drop in and catch up on the latest broadband news, tips, insights, and ramblings for the masses.
