For those moving from socket 1155/1156/1366 you might be a little surprised at just how much bigger the new CPUs are; it’s little wonder they require a twin-lever clamping system on the motherboard. Customer’s old CPU on the right (Intel i7-870, Socket 1156) and their new CPU on the left (i7-3820, Socket 2011). That’s a lot of extra CPU real estate for all those extra pins – almost 900 more. It’s quite a bit heavier in the hand, too.
If you’re running an OpenIndiana/OpenSolaris fileserver chances are you’ll need a static IP so that you’ll always know where to find it on the network. There is more than one way of doing this, but by far the easiest is using NWAM (Network Auto-Magic). Do the following (the # at the beginning of a line means you need superuser permissions, and wherever a prompt is shown with nothing typed after it, you just hit enter to accept the default):
# nwamcfg
nwamcfg> create ncp lan
nwamcfg:ncp:lan> create ncu phys e1000g0
Created ncu 'e1000g0'. Walking properties ...
activation-mode (manual) [manual|prioritized]> prioritized
enabled (true) [true|false]>
priority-group> 0
priority-mode [exclusive|shared|all]> shared
link-mac-addr>
link-autopush>
link-mtu>
nwamcfg:ncp:lan:ncu:e1000g0> end
Committed changes
nwamcfg:ncp:lan> create ncu ip e1000g0
Created ncu 'e1000g0'. Walking properties ...
enabled (true) [true|false]>
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 10.1.1.2
ipv4-default-route> 10.1.1.1
nwamcfg:ncp:lan:ncu:e1000g0> end
Committed changes
nwamcfg:ncp:lan> end
nwamcfg> end
# nwamadm enable -p ncp lan
Voila! You have now set a static IPv4 address in OpenIndiana/OpenSolaris. If you would like to set an IPv6 address instead, or both, select ipv6 or nothing when prompted for the ip-version. Replace the 10.1.1.2 and the gateway (10.1.1.1) with whatever IP addresses match your own network.
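Once the profile is enabled, it’s worth confirming it actually took effect. A quick check along these lines should work – e1000g0 is the example interface from above, and ipadm is present on current OpenIndiana builds, while older OpenSolaris releases can use ifconfig instead:

```shell
# List network profiles and confirm the 'lan' NCP is now active
nwamadm list

# Show the address actually configured on the interface
ipadm show-addr         # current OpenIndiana builds
ifconfig e1000g0        # older OpenSolaris releases
```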
We had a look at a pool today which had a raidz1 vdev consisting of 2x2TB drives and 1x1TB drive. The 1TB drive failed and was replaced with a 2TB drive; however, on resilver the pool didn’t expand. Exporting and importing didn’t work, so we tried:

# zpool online -e [poolname] [disk01] [disk02] [disk03]

…and the available capacity increased as it should.
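A related knob worth knowing about: pools also have an autoexpand property, and turning it on before replacing the smaller disk should let the pool grow by itself once the resilver finishes. Here tank is just a placeholder pool name – check the zpool man page on your build, as older releases may lack the property:

```shell
# Allow the pool to grow automatically when all disks in a vdev get bigger
zpool set autoexpand=on tank

# Confirm the property took
zpool get autoexpand tank
```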
If you’re overclocking, or just unsure whether your system is performing properly, a stability test can be quite helpful to determine just what’s going on. One of the most widely used and effective tests is Prime95; it’s lightweight, does a good job of stressing your CPU and RAM, and will happily run for as long as you’ll let it. You can download it from here. Choose whichever version matches your operating system, download and install.

When you open it you will be asked whether you’d like to join GIMPS or if you’re just stress testing; select “Just Stress Testing”. You will then be greeted by a window which looks like this (click twice to expand): Choose Blend, click OK. The program detects how many cores and threads your CPU has and should automatically select the right number of torture tests to run. Once you have clicked OK you will see a rather cramped screen which looks like so: To make this into a more sensible window, click Window -> Merge All Workers, and if you want it all in the one window then click Window -> Merge Main & Comms & Workers.

The testing has now started; you will probably hear your CPU fan ramp up shortly to keep your CPU cool as it works hard. Each of the worker threads will report in when it reaches a checkpoint successfully; if there’s an error it will tell you, or your system will crash out. Since the test is very CPU intensive it generates a lot of heat, so now is a good time to keep an eye on your CPU temperatures to see how they are faring under load. A good program for this on Windows systems is Core Temp, found here. Be warned, though: it does come with a couple of installer toolbars which you will probably want to decline installing. Core Temp shows you how hot your CPU is at any given point in time; when open it looks like this: Which temperatures are cause for concern will depend on your CPU, but anything over 80 degrees Celsius is worrying for most chips, and anything over 90 requires immediate attention.
High temperatures can damage your CPU and other components, so if you see anything out of the ordinary stop Prime95 (click Test -> Stop or Test -> Exit) and your temperatures should drop; then find out why they were so high! Laptops and the like have much less ability to move heat than their desktop counterparts, so be prepared to see higher temperatures there. If you like being able to monitor your CPU temperatures you can start Core Temp minimised to your system tray; I typically have mine showing both my highest core temperature and my clock speed, like so: 34 degrees and 1.6GHz (the speed a 2700K drops to once it’s idle).

Back to the stability testing – let Prime95 run for at least 6 hours, preferably 24 – we have had both overclocked and faulty systems take over 24 hours of stability testing before they start throwing up errors. How long you let it run is a matter of taste and of how important maximum stability is to you. If it throws up an error it’s time to start looking at why: if the system is overclocked you need to change your settings; if it’s not, you need to start isolating hardware to find the cause. If it does crash your system you’ll usually get a blue screen error in my experience, so it should be pretty obvious when you do find that you have issues 🙂

Found this writeup useful? Or do you use a different program for stress testing? Let us know in the comments!
Something to think about – we had a couple of drives from the same batch die within a day of each other recently, and while that may be coincidence, there may also have been something about that batch that made them fail under certain usage conditions or at a certain age. Had they been a mirrored pair, the mirror would have been lost and we’d have had to restore the data from backup. Fortunately we had spread that batch out across several arrays, so the failures didn’t result in any downtime and the drives have been RMA’d as per usual. If you’re buying several drives at once this is something to keep in mind – try to source them from different batches if possible, or spread the drives out so that if they do all fail within a short space of time you won’t lose anything. We certainly don’t keep our backups on the same media as our main arrays, just in case something catastrophic happens to a batch of disks. A good rule of thumb for valuable data is 3-2-1: 3 copies, on 2 types of media, with 1 offsite. If your data is important – family photos, important documents, anything that would be hard to replace or irreplaceable – remember to back it up. Even a drive or thumbdrive at a family member or friend’s house could save you if your home or work gets burgled.
I was asked how to add a disk into a ZFS mirror today; this is an easy one:

# zpool attach [poolname] [existing disk] [new disk]

…and done! The pool will begin to resilver with the new disk as part of the existing mirror. You might want to do this to replace a part of the mirror which has been removed or is faulty, or you might want to expand a 2-way mirror to a 3-way mirror (longer MTBF for the 3-way; beyond that it’s not worth it).
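To make that concrete, here’s what the sequence might look like on a hypothetical pool named tank with made-up device names – always confirm your own device names with zpool status first:

```shell
# See the current layout and note an existing mirror member
zpool status tank

# Attach the new disk alongside an existing one;
# a 2-way mirror becomes a 3-way mirror and resilvering starts
zpool attach tank c0t1d0 c0t3d0

# Keep an eye on resilver progress
zpool status tank
```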
For those of you with the Asus P8B WS-based workstations, there is a new BIOS available which enables use of the new Ivy Bridge Xeons (#2009). You can download the BIOS update here.
Asus have updated the BIOS of this workstation board to support Ivy Bridge CPUs. Please note that you’ll need a Sandy Bridge CPU installed to perform the update, so if you are upgrading, make sure you flash the BIOS before installing the new CPU. You can find the update (#2009) here.
This is a question we get asked reasonably often – what is the right SAS cable length for a Norco 4220/4224 chassis when using IBM M1015 HBAs?

50cm: Doable, but puts a great deal of strain on the connectors, particularly at the card end.
60cm: Enough, with some slack and no excessive pressure on the connectors.
75cm: Enough, with a reasonable amount of slack; will likely require a few cable ties to keep it from obstructing airflow if you have more than one.
1m: Too much slack.

We tested each length with the HBAs in every available slot, from nearest the CPU to farthest away. If your RAID cards/HBAs had end-mounted ports rather than upward-facing ports, you could get away with the 50cm cables without issue.