Intel’s network cards are popular for their speed and reliability, which often exceed those of the onboard chips found on motherboards and other devices. The Intel Gigabit CT Desktop Network Adapter is a PCI-E x1 add-in card with a single gigabit port, usually selling for around AU$30. It is low-profile and should come with a low-profile bracket – handy for thin HTPCs or servers – and is passively cooled, as you would expect. It supports auto-negotiation and Auto-MDIX – so you don’t need to worry about crossover cables – and is PCI-E v1.1, which supplies more than enough bandwidth for a single gigabit port and should work fine in v2 and v3 slots. It is also supposed to be compatible with x1, x4, x8 and x16 slots.

The network controller is Intel’s 82574L – a design released in 2008, with an expected discontinuance of 2018 – Intel certainly expect to get a lot of mileage out of that chip! The 82574L has a TDP of below a single watt, so this is going to be quite a power-efficient add-in card; Intel state that typical power consumption is in the range of 1.9W for the entire card. Driver support is excellent across virtually all operating systems – it’s plug and play with many Linux distros and works perfectly well with the provided drivers on Windows machines. It also has support for teaming/bonding/link aggregation and 9K jumbo frames. Physically the card is 11.92cm long and 5.53cm wide.

In our tests the card managed an impressive average of approx. 950Mbit/s – very close to the theoretical maximum throughput of a gigabit line. If you are in the market for a reliable, fast PCI-Express network card and only need a single port, this card is well worth a look – between the features, low power usage, low-profile option and driver support it’s an excellent buy for the price.
If you want to see how much traffic is passing through your network port there’s a handy tool called vnstat which will tally the amount of data passing through. You can install it with:
sudo apt-get install vnstat

It will usually add the databases and network interfaces automatically. If it doesn’t and gives you an error, you can create the database(s) with:
sudo vnstat -u -i eth0

If you have multiple network cards/ports you can add those in, too:
sudo vnstat -u -i eth1
sudo vnstat -u -i eth2
…etc.

If the vnstat daemon isn’t already running, you can start it with:
sudo /etc/init.d/vnstat start

If you need to change the maximum bandwidth from the default of 100Mbit you can edit the file:
/etc/vnstat.conf

Scroll down until you see the following:
# maximum bandwidth (Mbit) for all interfaces, 0 = disable feature
# (unless interface specific limit is given)
MaxBandwidth 100

and make MaxBandwidth the figure you require (e.g. 1000). If you make a change, restart vnstat with:
sudo /etc/init.d/vnstat restart

You can now see how much traffic has come through the NIC since vnstat started recording – at first it probably won’t be much (if any), but as it adds up you can check it with:
vnstat

By default this prints a summary of the traffic recorded so far. You can watch how much traffic is flowing through in real time by running:
vnstat -i eth0 -l

This gives you a live view of the current traffic. You can end it with CTRL+C, which then prints a summary of the sampled period. You can get an hourly summary with:
vnstat -i eth0 -h

Daily summary with:
vnstat -i eth0 -d

Monthly summary with:
vnstat -i eth0 -m

This is a really handy way of keeping track of your network traffic – whether it’s out of curiosity, wanting to know how much load your network is under, or looking for a bottleneck, this can be quite a valuable tool.
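For scripting or cron jobs, vnstat also offers a `--oneline` mode that emits a single semicolon-separated record, which is easy to cut apart. A minimal sketch – the sample record and the field order (today’s total in field 6) are assumptions based on vnstat 1.x output, so check your own version’s output first:

```shell
# Hypothetical sample of `vnstat --oneline` output; field order assumed
# from vnstat 1.x: version;interface;day;rx today;tx today;total today;...
line='1;eth0;01/18/13;52.12 MiB;41.01 MiB;93.13 MiB;9.02 kbit/s'

# Pull out today's total (field 6) for use in a monitoring script:
today_total=$(printf '%s' "$line" | cut -d';' -f6)
echo "today: $today_total"
```

In a real script you would replace the sample line with `line=$(vnstat -i eth0 --oneline)`.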
This one came up today with a customer who was extending her Wireless N home wifi network with a wireless access point. The access point was reporting a Wireless N connection at a speed of 54Mb/s rather than 150 or 300Mb/s, and transfer speeds for anything connected to the access point were only 2-3MB/s (not unusual for 54Mb/s). The problem in this case turned out to be that the access point was set to TKIP encryption rather than AES – the 802.11n specification doesn’t allow the faster N data rates with WEP or WPA-TKIP, so using either can drop speeds back to Wireless G speed, which is 54Mb/s max. Switching the encryption to AES put speeds back to what they should be. This won’t always be the solution, but it’s worth checking if you’re seeing that issue.
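On a Linux client you can spot this fallback by checking the negotiated rate with `iwconfig`. A quick sketch of pulling the rate out of its output – the sample line below is hypothetical, and the exact format varies between wireless drivers and versions:

```shell
# Hypothetical iwconfig output line; the real format varies by driver.
sample='wlan0     IEEE 802.11bgn  ESSID:"home"  Bit Rate=54 Mb/s'

# Extract the negotiated bit rate; 54 on an N network suggests a
# WEP/TKIP fallback worth investigating on the access point.
rate=$(printf '%s' "$sample" | grep -o 'Bit Rate=[0-9]*' | cut -d'=' -f2)
echo "negotiated rate: ${rate} Mb/s"
```

In practice you would feed it live output instead: `iwconfig wlan0 | grep 'Bit Rate'`.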
Some customers have been having DNS issues after setting a static IP on Ubuntu 12.04.1, where the server no longer picks up the DNS settings as it did before; this can be easily fixed by adding the following to /etc/network/interfaces after the eth0 entry:
dns-nameservers [ip.of.your.router]

e.g., for a modem/router that’s 10.1.1.1 on your local network, your /etc/network/interfaces file might look like:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.1.1.50
netmask 255.255.255.0
gateway 10.1.1.1
broadcast 10.1.1.255
dns-nameservers 10.1.1.1

Restart your network with:
sudo /etc/init.d/networking restart

…then try pinging Google or something similar and you should have success 🙂 It’s generally not advisable for Australians to use nameservers located elsewhere, e.g. Google’s public DNS servers 8.8.8.8 and 8.8.4.4 – some things which are unmetered by your ISP may become metered if you do so.
There’s not a great deal to say about these cards apart from that they allow you some crazy network speeds, if you have the disk speed to keep up. They can certainly alleviate network bottlenecks if gigabit is holding you back! This particular card has a fan to keep the chipset cool; it’s not going to be heard in a server room, but if your workstation is quiet the high-pitched whine will probably be audible. The card does get reasonably hot, particularly if you’re making good use of its capabilities – make sure you have enough airflow in the chassis to keep it cool. Keep in mind it’s assumed that these cards will be used in an environment where there’s at least 200 linear feet per minute of airflow passing over them. 10-gigabit cards are coming down in price quite significantly, though switches are still out of reach of most enthusiasts/small businesses. Watch this space, however, as 10GbE connections are making their way into high-end server boards more regularly, and that will slowly filter down to the consumer level. SSD arrays becoming more commonplace will only help with that, as will the new 12Gb/s SAS cards from LSI – it’s hard to make use of all that bandwidth if you’re piping it over gigabit!
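As a rough back-of-the-envelope check on that last point – assuming a single SATA SSD sustains around 500MB/s (an assumption; real-world figures vary) – here’s how many drives it takes to fill a 10Gb/s pipe:

```shell
# Back-of-the-envelope: 500 MB/s per SSD * 8 bits/byte = 4000 Mbit/s each.
ssd_mbit=$((500 * 8))

# Ceiling division: drives needed to saturate a 10,000 Mbit/s link.
drives=$(( (10000 + ssd_mbit - 1) / ssd_mbit ))
echo "drives needed: $drives"   # → 3
```

So even a small SSD array can push past gigabit many times over, which is exactly why these cards are becoming interesting outside the server room.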