Windows Foundation

Infrastructure Blues

Why's the network running so slow? Where's the bottleneck? Is it hardware, software or a user? You can find answers quickly if you know your network's infrastructure.


As a server administrator, you probably spend your day taking care of, and worrying about, the server farm. There are logs to read, applications to attend to, MP3s to delete off users' home drives and so forth. You're a busy person.

So what happens when you come in one sunny Monday morning and you're practically attacked by users who want to know why the network is so slow? You've got servers to attend to; how are you supposed to know what's going on with the network? But you can't tell your users that, especially if you're the lone administrator in your company. You have to dig in and figure out what the problem is.

You might at first suspect that somebody's trying to download a huge file from the Internet or perhaps running a big report from an application server. Upon checking, you find that there aren't a lot of users attached to the network; there can't be, because the network is so slow!

This is where knowledge of the infrastructure comes in handy. Your problem may not lie with the servers or users at all; it may be in the guts of the network: the cabling (and associated wiring closets), protocols, switches and hubs. It's key that administrators understand the infrastructure almost intimately, because so many problems occur there. If you're not familiar with the infrastructure's makeup, you may overlook a very important place to look for clues in your troubleshooting efforts.

MCSA/MCSE Candidates, Listen Up

Because of the importance of infrastructure, the CompTIA A+ and Network+ tests are good to have under your belt. Both of these tests will help you understand how a PC with a network interface card (NIC) inside it connects to the network world. The CompTIA tests are designed to be agnostic toward any one personal or server operating system (OS) or hardware vendor, so they're a great way of getting an overall flavor for what PCs, networks and servers are all about without having to delve into any one company's offerings. See www.comptia.com for more details regarding their tests.

Microsoft indirectly sanctions these exams, allowing MCSA candidates who have passed the combination of CompTIA's A+ and Network+ exams to waive the MCSA track's elective requirement. See www.microsoft.com/trainingcert/ for details.

Microsoft used to offer a test called Networking Essentials but discontinued that requirement when it revamped the MCSE program. In its place you might pursue the CompTIA offerings. While you can learn some things from Exam 70-221, Designing a Microsoft Windows 2000 Network Infrastructure, that exam assumes that you already know something about network infrastructures to start with (and it's brutal). Windows 2000 has heavily raised the bar in terms of the underlying infrastructure: bandwidth requirements, Windows 2000 service offerings and so forth.

So, let's take a look at the things involved in your network's infrastructure in the next few months, understanding that you'll have to take a closer look at each in order to really feel comfortable with the network's overall operation. I'll also give you some pointers on what tests to study for in order to augment your overall administrative experience (and make yourself even more employable than you already are).

First off, let's look at bandwidth issues and how the cables your network uses can affect network performance. In the coming months, we'll discuss the differences between hubs, switches and routers, and the types of internetworking protocols that you're likely to encounter.

Bandwidth of the Network
Most electronics engineers disdain the word "speed" when used to describe the bandwidth you have on your network, but that's what it really comes down to: the number of bits per second that can be transmitted through the cabling, which is really the network's data transfer rate. (Never, ever refer to the network's data transfer rate as speed when you're talking to people who work with infrastructures all the time. They'll lecture you for an hour on how it's not speed, it's the transfer rate. As for me, I'm perfectly fine with the word speed, as I think it adequately describes what's really going on.) Older networks function at 10 million bits per second (Mbps; 10Base-T), newer networks run at 100 Mbps (100Base-T) and the newest networks operate at 1000 Mbps (gigabit Ethernet, "Gig-E" or 1000Base-T). Note that all of the data transfer rates given above are based upon Ethernet, today's king of network architectures. Other data transfer rates come into play when you consider token ring, an older and at one time very popular architecture.
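If you want a feel for what those rates mean in practice, here's a quick back-of-the-envelope sketch in Python. The 100MB file size and the decision to ignore protocol overhead are assumptions of mine, purely for illustration.

# Rough transfer-time estimates at common Ethernet data transfer rates.
# Ignores protocol overhead and congestion; the 100MB file size is just an
# illustrative assumption, not a figure from this article.
RATES_MBPS = {"10Base-T": 10, "100Base-T": 100, "1000Base-T (Gig-E)": 1000}

def transfer_seconds(file_megabytes: float, rate_mbps: float) -> float:
    bits = file_megabytes * 8 * 1_000_000      # megabytes -> bits
    return bits / (rate_mbps * 1_000_000)      # bits / (bits per second)

for name, rate in RATES_MBPS.items():
    print(f"{name:>20}: a 100MB file takes {transfer_seconds(100, rate):6.1f} seconds")

Run it and the same file drops from roughly 80 seconds at 10Base-T to about eight seconds at 100Base-T and under a second at Gig-E, which is exactly why users notice when they're stuck on the slow end.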

The CompTIA Network+ test will delve into all of the various ways that a network can be wired (its topology) as well as the assortment of data transfer rates that can be derived from today's network architectures. Ethernet is a network architecture, and you can wire it in a bus or star topology (in a line, one PC after another, or with all PC cables running to a single hub or switch, respectively; see Table 1). Token ring is also a network architecture and is always wired in a ring. The Institute of Electrical and Electronics Engineers (IEEE), whose designations appear in Table 1, is an international body that sets standards for network architectures and data transfer rates. See http://computer.org for more info.

Architecture    Topology    Data Transfer Rate    IEEE Designation
Ethernet        Star        10 Mb/s               802.3 10Base-T
                            100 Mb/s              802.3u 100Base-T
                            1000 Mb/s             802.3ab 1000Base-T
Ethernet        Bus         10 Mb/s               802.3 10Base-T
                            100 Mb/s              802.3u 100Base-T
                            1000 Mb/s             802.3ab 1000Base-T
Token Ring      Ring        4 Mb/s                802.5
                            16 Mb/s               802.5
                            100 Mb/s              802.5

Table 1. A comparison of architectures, topologies and data transfer rates.

Cabling
Network bandwidth is predicated largely on the cabling you have installed. Category 4 (Cat 4) wiring cannot play in the 100 Mb/s sandbox; you have to have Category 5 (Cat 5) wiring to make this happen. While some well-installed Cat 5 runs might work in the gigabit environment, Category 6 (Cat 6) cabling is generally recommended. So the bandwidth you desire for your network depends on the health of your building's cabling. Re-cabling a building can be an incredibly expensive proposition and one that you should never undertake yourself unless you've been trained to do it. Leave the re-cabling job to professional cabling companies.
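If you like to see rules of thumb written down, here's a minimal Python sketch of the category-to-rate guidance above. The mapping simply reflects this article's recommendations (Cat 5 for 100 Mb/s, Cat 6 for gigabit); real-world results also depend on run length and installation quality.

# Illustrative lookup of the fastest Ethernet rate each cable category
# comfortably supports, per the rules of thumb in this article.
MAX_RATE_MBPS = {
    "Cat 3": 10,      # 10Base-T only
    "Cat 4": 16,      # token ring era; won't play in the 100 Mb/s sandbox
    "Cat 5": 100,     # 100Base-T
    "Cat 6": 1000,    # recommended for 1000Base-T (Gig-E)
}

def supports(category: str, desired_mbps: int) -> bool:
    """True if the category is rated for the desired data transfer rate."""
    return MAX_RATE_MBPS.get(category, 0) >= desired_mbps

print(supports("Cat 4", 100))   # False -- the classic "slow user" culprit
print(supports("Cat 5", 100))   # True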

There are three kinds of cabling you have to consider:

  • Cable that runs from your data center to the wall jack in a user's office
  • Cable that runs between wiring closets (we'll discuss wiring closets in more detail later on in this article)
  • Cable that connects PCs or servers to the wiring infrastructure

Generally, the cable that runs through your ceilings or floors to users' offices and terminates at their wall jacks is copper, is plenum-rated where it passes through air-handling spaces (so it won't give off harmful fumes in the event of a fire) and is solid-conductor. Patch cables that run from PCs or servers to the wiring infrastructure are typically stranded (wound) rather than solid and don't necessarily need to be plenum-rated. (Solid cable has a tendency to break at points where it's repeatedly or excessively bent, so stranded cable makes more sense for the short runs from a user's NIC to the wall jack. Solid cable is less expensive, so it makes sense to use it for the long, undisturbed runs through a ceiling.) Cables that run from one wiring closet to another can be fiber-optic or copper (and typically solid if copper).

Take a look at a cable next time you're in a wiring closet. It will be clearly marked with the category it's rated for. Fiber-optic cable (which isn't marked with a category) is typically orange, flat and easy to recognize. Ethernet cable comes in many different colors, is thinner and more rounded than fiber-optic cable and must be labeled with the category it supports.

When studying for a network exam, you'll likely be tested on the different kinds of topologies and cables that you might run into. While it's good to study these kinds of cabling environments for the test, it will be an extremely rare case in which you run into anything other than Cat 5 or Cat 6 in an Ethernet environment. So strong is the Ethernet standard, and so widely adopted is 100Base-T with Cat 5 cabling, that you should consider it a lucky break to work on old cable such as coaxial 10Base-2 or Token Ring with its Multistation Access Units (MAUs) and other funky architectures. (OK then, maybe not. Consider it lucky that you have Ethernet instead. And content yourself to study other cabling for the network exams, planning on never running into any other kind…seriously.)

Every once in a great while, you'll exhaust every troubleshooting avenue for a user having problems on a 100Base-T network, only to find out that the user has an old Cat 4 cable. It's easy to tell: the cable's well marked. Sub it out and the user is on his way.

Why are companies so enamored with Ethernet and Cat 5 or Cat 6 cable? Because it's easy to understand, standardized, quick and painless. It works! Where will you run into trouble? We've already alluded to the Cat 4 cable trying to work on a 100Base-T network. But you can also run into badly crimped connectors, stranded cable that's come loose, cables, jacks or receptacles that have failed, Ethernet cable-length violations and cables that aren't properly seated in their receptacles. You can purchase cable testers that will assist you in troubleshooting these problems, but a watchful eye will be of great help for administrators who don't have bucks to throw at expensive testing gear.

Cabling shouldn't run parallel to electrical wires, or you'll get interference problems. Cables can't be longer than the Ethernet length limit (100 meters for a twisted-pair run), or you'll run into performance problems or dropped data. Cables also shouldn't come out of a switch in the IDF or MDF and run into a hub in a user's office off of which the user has hung a bunch of devices; that's a great way to generate performance issues. Better to run multiple jacks to such a user. Poorly terminated or mistreated cables (bent at a 90-degree angle to make way for a desk) will give you fits.
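If you keep even a simple inventory of your cable runs, a short script can flag length violations before they bite you. This is only a sketch with made-up run data; the 100-meter figure is the standard limit for a twisted-pair Ethernet run.

# Flag cable runs that exceed the twisted-pair Ethernet length limit.
# The runs listed here are hypothetical examples; substitute your own inventory.
MAX_RUN_METERS = 100   # standard limit for a 10/100/1000Base-T run

runs = [
    {"jack": "2-14", "closet": "IDF-2", "meters": 87},
    {"jack": "3-02", "closet": "MDF",   "meters": 112},   # over the limit
]

for run in runs:
    if run["meters"] > MAX_RUN_METERS:
        print(f"Jack {run['jack']} ({run['closet']}): {run['meters']} m "
              f"exceeds the {MAX_RUN_METERS} m limit -- expect trouble")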

What About Wireless?
Well, it's a big topic. Suffice it to say for now that you'll have to set up at least one Wireless Access Point (WAP) where the wireless devices can connect to the network, and that typical building materials like concrete and steel girders will severely limit a device's ability to reach its WAP, in spite of the distances the vendor says the wireless device can go. Test, test, test! One of my server administrators set up a WAP and then took her Compaq iPAQ around the building to see how well she could connect. In some cases she couldn't hit the WAP at all; in others she got the full 100 yards the literature said she could.

Wiring Closets—MDF and IDF
We give an interesting name to the wiring closets in your building. The main wiring closet, typically where the routers, telephone gear and perhaps even some of your servers are located, is called the Main Distribution Facility (MDF). Most MDFs have a wiring patch panel in them: on one side are jacks, on the other are cables leading from the patch panel out into the building to the various offices. There are Ethernet and fiber patch panels (token ring requires a slightly different hardware configuration). The patch panels and the associated wiring running to offices are generally installed by cabling experts. Even so, I've run into network performance issues that involved shoddy crimping of the connections between the patch panel and the wiring running to office wall jacks.

You'll run an Ethernet patch cable from your servers, routers and other gear to a hub or switch (I'll cover hubs and switches in more depth next time) or directly to the patch panel itself. Each patch panel jack is numbered, giving you the ability to trace where your wires are going. If you have a feeder wire going to another wiring closet somewhere in the building, that other closet is called an Intermediate Distribution Facility (IDF). In many configurations, the wire connecting the two closets is fiber-optic, because fiber was at one time able to handle higher data transfer rates than conventional Ethernet cabling. However, today's Gig-E standards have allowed Ethernet to play in the high-speed arena. Figure 1 shows an MDF connected to an IDF with a fiber-optic cable.

Figure 1. A typical MDF/IDF configuration. This shows two wiring closets in different sections of your building. Server A is connected by Ethernet cable to the backbone, Server B by fiber. Note that the same switch can host a Cat 5 and a fiber-optic connection as well as different data transfer rates.

Note that the patch panel cables can go to different parts of a building: you might have some offices fed off of the patch panel in the IDF, while others are fed off of the patch panel in the MDF. Fiber patch panels are easy to differentiate from standard cable panels; there's an orange jumper cable running from the fiber patch panel to a switch. It's possible to equip servers with fiber-optic NICs and connect them directly to the fiber-optic ports on a switch. The patch panels, switches and associated cabling between the IDF and MDF comprise the network's backbone. Any time a server is hooked directly to a high-speed switch that communicates directly with the MDF or IDF, it's said to be hooked to the backbone. Servers can hook to the backbone through conventional Ethernet or fiber-optic cabling.

In Figure 2 you can see two wiring closets (I call them closets, and they might well be closet-sized, but they could also be full-sized rooms), each of which has two patch panels, one for fiber-optic (an eight-port panel) and one for Ethernet (a 16-port panel), as well as a switch. The MDF and IDF are connected by a fiber-optic cable running between them. One server, Server A, is connected to the backbone by an Ethernet cable; the other, Server B, is connected by a fiber-optic cable. Because they're both connected to the backbone, you've slightly reduced your chances of failure (because you don't have a wall jack introduced into the system) and you can run the servers at a higher data transfer rate than the workstations.

Figure 2. Server A is connected to an Ethernet port on a switch in the MDF. Server B is connected to a fiber port on the same switch. Server C is connected to a fiber port on a switch in the IDF. The two switches in the IDF are connected to one another by a jumper cable on the uplink ports. Incoming wiring from offices passes into the patch panel. Patch cables then run from the patch panel jacks to the switches. In this way users can communicate with the servers. Switch ports can be mixed and set for various data transfer rates. Servers are connected to the backbone.

I should make a few points before we continue. Fiber-optic cable uses strands of glass, two strands making up a pair. Fiber is typically installed, and priced, by the number of pairs that are pulled. For redundancy's sake, you'll want a second pair of fiber run between your MDF and IDF; that way, if the first pair fails, you can simply snap in the replacement pair and your network is running again. You should follow the same redundancy practice for Ethernet cabling running at Gig-E speeds. If the backbone cable connecting the MDF and IDF goes out, users can't communicate with one another or with the servers, so you always want to consider redundant links between your MDF and IDF.

You can have more than one IDF, so you can see how the design might get a little tricky. One company I worked for had 16 IDFs connected to the data center MDF. Generally, the MDF is the place where the servers live and where all of the building's network wiring terminates. It's Grand Central Station, if you will. You'll typically find that your WAN connections (T1, Frame Relay and so on) also terminate inside this room. The place where a WAN circuit (or any telephony circuit, for that matter) terminates in a building is called the demarcation point and is referred to by internetworkers (router and WAN circuit folks) as the "demarc" (I've also referred to it in some of my books as the "d-mark"). As a general rule of thumb, your routers are also located inside the MDF.

Finally, when considering the MDF and IDF backbone, you'll want to take into account the aggregate bandwidth of the backbone. What I mean is that if you have a 1000Base-T backbone (1000 Mbps) and you attach a lot of servers at 1000Base-T and dozens or hundreds of users at 10Base-T or 100Base-T, it's possible to saturate the backbone with the collective bandwidth used by all of those computers. Note that it's only remotely possible, because a user's workstation very seldom uses anywhere near the bandwidth its NIC is capable of putting out, nor do servers typically operate near the upper end of Gig-E bandwidth limits.

However, in large client/server installations that generate a lot of activity and reports, where large files are regularly dragged across the wire (as in the case of backups, for example), or where distance learning is going on and users are downloading video from the Web, it's possible for the backbone to get saturated and for the network to slow to a crawl. (Those three instances aren't the only ones that might introduce excessive aggregate bandwidth; they're just examples.) I've seen this happen time and time again. If you suspect an aggregate bandwidth saturation issue, have the network sniffed by a professional during the times you suspect it's happening. It's the only good way to figure out which stations are causing the problem and to whom they're connecting.
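Before you pay for a sniff, though, a rough estimate can tell you whether saturation is even plausible. The station counts and utilization percentages in this Python sketch are assumptions I've invented for illustration; only a protocol analyzer gives you real numbers.

# Back-of-the-envelope check: can the backbone absorb the aggregate demand?
# Counts and utilization figures are illustrative assumptions, not measurements.
BACKBONE_MBPS = 1000   # a 1000Base-T backbone

stations = [
    # (count, NIC rate in Mb/s, assumed average utilization of that rate)
    (200, 100, 0.02),    # office workstations, mostly idle
    (10, 1000, 0.10),    # servers pushing backups and reports
    (5, 100, 0.60),      # PCs streaming distance-learning video
]

demand = sum(count * rate * util for count, rate, util in stations)
print(f"Estimated aggregate demand: {demand:.0f} Mb/s "
      f"({demand / BACKBONE_MBPS:.0%} of the backbone)")

With those made-up numbers the estimate lands around 1,700 Mb/s, well past what a 1000 Mb/s backbone can carry, and that's exactly the sort of situation in which a sniffer earns its keep.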

I hope all of this discussion of MDFs, IDFs, cabling and so forth has helped rather than confused you. It's key that you understand how your cabling, MDF and IDF hook together, because a poor design or shoddy workmanship can be very effective at slowing a network down (and can make problems very difficult to find). If you have a problem and you've considered all things associated with the server software (including TCP/IP, a simple protocol suite but one that can cause plenty of trouble) and hardware, then your next stop is the infrastructure.

Above all, don't be afraid of the infrastructure. If you've ever set up a tent, wired a basement, worked a maze in a book, done macramé, then you have the basic ideas needed to understand a building's cabling. It's all about lines connecting to one thing, leading somewhere and then connecting to another.

Next time we'll talk about the devices that allow users to connect to the IDF and MDF and to communicate with one another and with the servers: switches and hubs. It's fun, interesting stuff, so stay tuned.
