So I have to get them out. Homo as ... Look at my demented sideways tooth!

Left Side of my Mouth

Right Side of my Mouth
So it's off to day surgery in a coming month!! 🙁

Only in Australia - Remember to call Todd!
Damn it's cool! Watch:
[youtube=http://www.youtube.com/watch?v=UJMepmfOgU0&w=425&h=344]
Just a reminder that this Valentine's Day is actually a special day … 😉
On Sat Feb 14 2009 10:31:30 GMT+1100 (AUS Eastern Daylight Time) epoch time will read:
1234567890.123
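You can check this yourself (a quick sketch using GNU date, as found on most Linux boxes):

```shell
# Convert the magic epoch value back to a calendar date in UTC
date -u -d @1234567890
# Prints: Fri Feb 13 23:31:30 UTC 2009
# Add 11 hours for AEDT (GMT+1100) and you get
# Sat Feb 14 10:31:30 2009 local time.
```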
So, at work I am lucky enough to get to play with three Sun x4500 x86_64 "Thumper" systems. You may be sitting there saying big deal; I say it's a lot of disk and sweet, sexy Sun hardware.
I have posted this because of the hard time I had trying to find information on linking the network interfaces and using jumbo frames to maximise the network throughput from your x4500. I have an x4500 using jumbo frames; it has two Gig (e1000g) interfaces, runs Solaris 10u6 with an rpool, and has a big fat data pool I call cesspool.
I have shares exported via NFS. Below I detail my config and what I have found to be the best-performing NFS mount options from clients. I did try this when the x4500 was on 10u5, but had difficulties: hosts that were not on the same switch as the device had speed issues with NFS. I contacted Sun and got some things to try; below is the final config I have found to work best. Please let me know if you have found better results or had success with different configurations.
Please note, I am now running Solaris 10u6, and apparently there was a bug with 10u5 and the e1000g driver.
Host (Solaris) Config: On Solaris, two things must be done to enable jumbo frames: set the MTU on the running interface, and persist the setting in the interface's hostname file so it survives a reboot. Please ensure the switch is configured before enabling the host:
HOSTNAME=god
INTERFACE=e1000g0
SIZE=9000
# Set the MTU on the live interface
ifconfig ${INTERFACE} mtu ${SIZE}
# Persist it: /etc/hostname.e1000g0 should contain "god mtu 9000"
echo "${HOSTNAME} mtu ${SIZE}" > /etc/hostname.${INTERFACE}
Switch Config:
system mtu jumbo 9000 (this gets hidden in the IOS defaults; note it only takes effect after a reload)
system mtu routing 1500 (this is an auto-inserted command by IOS)
show system mtu output:
System MTU size is 1500 bytes
System Jumbo MTU size is 9000 bytes
Routing MTU size is 1500 bytes
Remember to copy run start once happy with config 🙂
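Once both ends are set, a quick end-to-end sanity check (a sketch; the host name is just my file server, and the header arithmetic is the point): the largest ICMP payload that fits a 9000-byte MTU is 9000 minus the 20-byte IPv4 header and the 8-byte ICMP header.

```shell
# Largest ICMP payload that fits a 9000-byte MTU:
# 9000 (MTU) - 20 (IPv4 header) - 8 (ICMP header)
PAYLOAD=$((9000 - 20 - 8))
echo ${PAYLOAD}    # 8972
# From a Linux client, ping with don't-fragment set (GNU ping):
#   ping -M do -s ${PAYLOAD} god.cooperlees.com
# If a default small ping works but this one does not, something
# in the path is still sitting at a 1500-byte MTU.
```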
Host (Solaris) Config:
I wrote a script to apply it. This script assumes you already have /etc/defaultrouter, /etc/netmasks, /etc/resolv.conf and /etc/nsswitch.conf all set correctly.
Here is the script I used to apply the conf:
#!/usr/bin/bash
# Create link aggregation on plumper
# Ether Channel on switch ports 2 on each 3750 - 20081223
# Do I want these?
# -l = LACP mode - active, passive or disabled
# -T time - LACP timer ...
ifconfig e1000g0 unplumb
ifconfig e1000g1 unplumb
# Sun's suggestion
dladm create-aggr -P L4 -l active -d e1000g0 -d e1000g1 1
# Move hostname file
mv /etc/hostname.e1000g0 /etc/hostname.aggr1
# Check link
dladm show-aggr 1
# Set device IP
# Can set MTU here if jumbo enabled
ifconfig aggr1 plumb x.x.x.x up
# Show me devs / links so I can watch
dladm show-dev -s -i 2
Switch Config:
# = Insert Integer
Configure a Port Group:
- interface Port-channel#
- switchport access vlan #
- switchport mode access
- exit
- port-channel load-balance src-dst-ip
Please configure the ports you want in the channel (4 max) as follows: # = Insert Integer
- config term
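The member-port commands appear to have been cut off above; on a 3750 they would typically look like the following sketch (interface names, VLAN and channel numbers are placeholders, and `mode active` assumes LACP to match the `-l active` in the dladm config):

```
config term
interface range GigabitEthernet1/0/1 - 2
 switchport access vlan #
 switchport mode access
 channel-group 1 mode active
 exit
```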
Remember to copy run start once happy with config 🙂
This was another silly little mistake I was making: I was turning sharenfs=on for the ZFS file systems I wished to share, and then trying to apply the share properties using the share command and adding entries to the sharetab manually.
With ZFS though, all your NFS options should be applied via the sharenfs attribute on the ZFS filesystem, as in the following example:
zfs set sharenfs=ro,rw=god.cooperlees.com,root=god.cooperlees.com cesspool/home
These arguments get passed to 'share' via ZFS @ boot time.
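To confirm ZFS is managing the share, a small sketch (the dataset name is taken from my fstab examples below):

```shell
# Show the options ZFS will hand to share at boot
zfs get sharenfs cesspool/home
# After boot (or a `zfs share -a`), the export should appear
# without touching /etc/dfs/sharetab by hand:
share | grep cesspool
```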
Most of my clients (that I have tuned) are Linux boxes running Scientific Linux 5.2 (a Red Hat derivative, similar to CentOS). I have found that once jumbo frames and aggregated interfaces are involved, TCP performs better.
By default, TCP is used on modern Linux NFS clients, but on older Linux, Irix etc., UDP is. Once you try to move a large amount of data, UDP will not work if the host has a different MTU to the file server. (With old OSes like this running, you can tell I work @ a scientific research facility.)
Here are some examples of my mount options in /etc/fstab on these boxes:
Modern Linux Machines: (CentOS 5, Scientific Linux 5):
god.cooperlees.com:/cesspool/home /home nfs defaults,bg,intr,hard,noacl 0 0
Old Linux Machines: (Red Hat 7 etc.)
god.cooperlees.com:/cesspool/home /home nfs defaults,bg,intr,hard,tcp 0 0
Irix 6.5 (yuck - MIPS):
god.cooperlees.com:/cesspool/home /home nfs defaults,rw,sync,proto=tcp
- No noacl option here, and once again no UDP ...
Well hello all,
It sure has been a while since I have blogged on the I-R-Coops Blog. This year I am going to try to share funny stuff and tech crap (i.e. things I work out or things I think are cool) much more regularly. It was interesting to see the number of visits I actually got back in August (when I was in North America); it got up to 800 visits, more than I thought I would have got.
2008 was a great year. Visiting North America was a blast. The shagadoor is still going well and my job is good. Nothing to complain about from me. So let's get into 2009 and see what it brings ... Speak to you all again soon ...