This is a question we get asked reasonably often: what is the right SAS cable length for a Norco 4220/4224 chassis when using IBM M1015 HBAs?

50cm: Doable, but puts a great deal of strain on the connectors, particularly at the card end.
60cm: Enough, with some slack and no excessive pressure on the connectors.
75cm: Enough, with a reasonable amount of slack; will likely require a few cable ties to keep the cables from obstructing airflow if you have more than one.
1m: Too much slack.

We tested each length with the HBAs in every available slot, from nearest the CPU to farthest away. If your RAID cards/HBAs have end-mounted ports rather than upward-facing ports, you could get away with the 50cm cables without issue.
Intel have released some information about ECC in their networking cards. Some of the older chips, like the 82571 – found in cards like the Pro/1000 PT single/dual/quad-port NICs – actually do have error correction on the in-band traffic, which is good news. The latest generation (e.g. the I350 and X540) has ECC on both the in-band and out-of-band (management) traffic, on top of benefits like lower TDP and CPU consumption. Of note is that the chip most commonly used for onboard Intel gigabit LAN ports – the 82574L – doesn’t feature ECC or parity at all on either in-band or out-of-band traffic. For those trying to build the most stable, secure system possible, this is a consideration which may prompt you to look at some of the newer network cards which do have those features. For the majority of users it’s unlikely to have a significant impact on your data long-term, but if you’re buying new, it pays to have all the facts. And for those of us who are paranoid about data corruption, well, there’s now one more place you can have ECC for your peace of mind… Source.
The AMD Radeon HD 6450 is a low-end graphics card that’s perfectly suited to basic graphics duties, such as you’ll need on a server or workstation board that lacks inbuilt graphics (on-CPU or a discrete chip). It offers flexible outputs (VGA, HDMI and DVI), is more than powerful enough for everyday use or even production 2D Photoshop work, is silent thanks to its passive cooling design, and draws only ~6-9W at idle and ~27W under load. For systems which are generally headless you should probably be looking at a server board with onboard graphics, but if your setup requires a graphics card that will go largely unused, the low idle power draw is quite appealing. It does require an x16 physical slot but works fine in slots with fewer electrical lanes, e.g. x8 or below. The card doesn’t run particularly hot at idle, so the passive cooler should fit into most setups without adding significant heat load to nearby components. It’s also worth a look if you’ve hit the maximum number of monitors your current setup supports and don’t plan on gaming or doing 3D work on the additional monitor – there’s no point in spending big on a noisy gaming card for that purpose. At around AU$40 this card represents good value for someone who doesn’t need a great deal of graphics performance but wants to minimise the noise and power consumption of their system, and it’s our current go-to card for this sort of requirement.
One of the more common quad-port network cards that pops up online is the Pro/1000 PT. This is a 2006 Intel design – discontinued in 2009 – which generally sells for anywhere from AU$120-160 as a server pull. It uses two Intel 82571 chips, each controlling two gigabit ports, and is PCI-E 1.0 x4. The chips are widely supported and work out of the box with most operating systems and hypervisors. The card is also compatible with PCI-E 2.0 and 3.0 slots, and there are both low-profile and full-height variants. One of the appealing features of these cards – beyond four gigabit ports, of course – is teaming (aka trunking or bonding, covered by the IEEE 802.3ad spec). This lets you aggregate two, three or all four of the gigabit ports into a single logical link between the card and another device. Note that you won’t get 4Gb/s in a single transfer that way – it’s four gigabit pipes rather than one 4Gb pipe. This is particularly useful if the network traffic going into your server exceeds a single gigabit link, and it can yield a noticeable improvement in network performance – so long as your switch supports 802.3ad too. The card’s TDP is 12.1W and the chip heatsinks get reasonably hot to the touch without adequate airflow; these cards are designed for high-airflow server environments, so if you’re putting one in a quiet home server, consider how much airflow it will get. The controllers can be passed through to guests under hypervisors such as ESXi; keep in mind that you’re passing through one or both controllers rather than the ethernet ports themselves, so ports can only be passed through in pairs. This card represents quite good value for the home/SMB user who needs more ports, either to separate network traffic or to alleviate bandwidth congestion.
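For the curious, 802.3ad teaming of the card's four ports can be sketched on a modern Linux server using iproute2 and the kernel bonding driver. This is a minimal illustration only – the interface names and IP address here are assumptions (check `ip link` for yours), and your switch needs a matching LACP link-aggregation group configured on the corresponding ports:

```shell
# Create an 802.3ad (LACP) bond device; requires root and a switch
# configured for LACP on the connected ports.
ip link add bond0 type bond mode 802.3ad lacp_rate fast

# Enslave the quad-port card's interfaces (names are assumptions --
# substitute your own from `ip link`). Interfaces must be down first.
for nic in enp4s0f0 enp4s0f1 enp6s0f0 enp6s0f1; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done

# Address and bring up the aggregate link (example address only).
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up

# Check negotiation state with the switch.
cat /proc/net/bonding/bond0
```

Remember the caveat above: a single TCP stream still traverses one physical port, so you get four 1Gb/s lanes shared across connections, not one 4Gb/s lane.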
The wide compatibility is also an advantage for those using motherboards without existing Intel network controllers, and it should work out of the box with just about any modern OS, including ESXi. At about a third of the cost of the current equivalent (the i350-T4), this is definitely a card to consider if you’re looking for an Intel NIC but don’t want to buy new.