Solaris 10 + Jumbo Frames + Link Aggregation with Cisco 3750 Switch + NFS Exporting / Mounting

So, at work I am lucky enough to get to play with 3 Sun x4500 x86_64 Thumper systems. You may be sitting there saying big deal; I say it's a lot of disk and sweet, sexy Sun hardware.

The Sun x4500 Thumper

I am posting this because of the hard time I had finding information on aggregating the network interfaces and using jumbo frames to maximise network throughput from an x4500.

I have an x4500 running Solaris 10u6, using jumbo frames, with two gigabit (e1000g) interfaces, an rpool, and a big fat data pool I call cesspool. I have shares exported via NFS. Below I detail my configuration and what I have found to be the best-performing NFS mount options from clients.

I did try to do this when I had the x4500 on 10u5, but had difficulties: hosts that were not on the same switch as the device were having speed issues with NFS. I contacted Sun and got some things to try; combined with my own experiments, below is the final configuration I have found to work best. Please let me know if you have found better results or success with different configurations. Please note that I am now running Solaris 10u6; apparently there was a bug with 10u5 and the e1000g driver.

1) Enabling Jumbo Frames

Host (Solaris) Config:

On Solaris, two things must be done to enable jumbo frames. Please ensure the switch is configured before enabling the host:

HOSTNAME=god
INTERFACE=e1000g0
SIZE=9000

  1. Enable it on the driver - e.g. for e1000g, edit /kernel/drv/e1000g.conf
    • A reboot will be required if not already enabled
  2. Enable jumbo frames with ifconfig
    • From CLI = ifconfig ${INTERFACE} mtu ${SIZE}
    • At boot = make /etc/hostname.${INTERFACE} contain: ${HOSTNAME} mtu ${SIZE}

    - This has been tested on both Solaris 10u6 and OpenSolaris 2008.11
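For step 1, here is a sketch of the driver change, assuming the stock e1000g.conf layout with one value per possible instance (per the driver's own comments, 0 = standard 1500-byte frames, 1 = up to ~4k, 2 = up to ~8k, 3 = up to ~16k):

# /kernel/drv/e1000g.conf - allow jumbo frames on every e1000g instance
MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;

And for step 2's boot-time file, with my values above, /etc/hostname.e1000g0 ends up containing just:

god mtu 9000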

Switch Config:

system mtu jumbo 9000 (this gets hidden in the IOS defaults)
system mtu routing 1500 (this is an auto insert command by IOS)

Show system mtu Output:
System MTU size is 1500 bytes
System Jumbo MTU size is 9000 bytes
Routing MTU size is 1500 bytes

Remember to copy run start once happy with config 🙂

2) Enabling Aggregated Interfaces

Host (Solaris) Config:

I wrote a script to apply the configuration. This script assumes you already have /etc/defaultrouter, /etc/netmasks, /etc/resolv.conf and /etc/nsswitch.conf all set correctly.

Here is the script I used to apply the conf:

#!/usr/bin/bash

# Create link aggregation on thumper
# EtherChannel on switch port 2 on each 3750 - 20081223

# Do I want these ?
# -l = LACP mode - active, passive or disabled
# -T time = LACP timer

ifconfig e1000g0 unplumb
ifconfig e1000g1 unplumb

# Sun's Suggestion
dladm create-aggr -P L4 -l active -d e1000g0 -d e1000g1 1

# Move hostname file
mv /etc/hostname.e1000g0 /etc/hostname.aggr1

# Check Link
dladm show-aggr 1

# Set device IP # Can set MTU here if jumbo enabled
ifconfig aggr1 plumb x.x.x.x up

# Show me devs / links so I can watch
dladm show-dev -s -i 2
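The mv above is what makes the address persistent across reboots. Assuming jumbo frames are also enabled on the underlying driver, the resulting /etc/hostname.aggr1 can simply read:

god mtu 9000

On Solaris 10 the aggregation itself (key 1) is stored persistently by dladm when created, so only the hostname file needs to follow the interface rename.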

Switch Config:

# = Insert Integer

Configure a Port Group:

  • interface Port-channel#
    • switchport access vlan #
    • switchport mode access
  • exit
  • port-channel load-balance src-dst-ip

Please configure the ports you want in the channel (4 max) as follows:

# = Insert Integer

  • config term
    • interface INTERFACE
      • channel-group # mode passive
      • channel-protocol lacp
      • switchport access vlan #
      • switchport mode access
      • exit
    • end
  • show run (to verify)

Remember to copy run start once happy with config 🙂
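Putting the two bullet lists together, a complete session for a two-port channel looks roughly like this. The channel number, VLAN 10 and the interface names are placeholders for your environment; port 2 on each 3750 in the stack matches the script comment above:

config term
port-channel load-balance src-dst-ip
interface Port-channel1
 switchport access vlan 10
 switchport mode access
exit
interface range GigabitEthernet1/0/2 , GigabitEthernet2/0/2
 channel-group 1 mode passive
 channel-protocol lacp
 switchport access vlan 10
 switchport mode access
end
show run
copy run start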

3) NFS Sharing w/ZFS

This was another silly little mistake I was making: I was setting sharenfs=on on the ZFS file systems I wished to share, then trying to apply the share properties with the share command and adding entries to the sharetab manually. With ZFS though, all your NFS options should be applied via the sharenfs attribute on the ZFS filesystem, as in the following example:

  • zfs set sharenfs="ro,rw=god.cooperlees.com,root=god.cooperlees.com" cesspool/home

These arguments get passed to 'share' by ZFS at boot time.
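To sanity-check that ZFS, and not a leftover manual share, owns the export (using cesspool/home as in the mount examples below):

zfs get sharenfs cesspool/home
share
cat /etc/dfs/sharetab

The share and sharetab entries should appear automatically; if you edited sharetab by hand earlier, clear that out first.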

4) NFS Mount Options

Most of my clients (that I have tuned) are Linux boxes running Scientific Linux 5.2 (a Red Hat derivative, similar to CentOS). I have found that once jumbo frames and aggregated interfaces are involved, TCP performs better. By default, TCP is used by modern Linux NFS clients, but on older Linux, IRIX etc. UDP is the default, which, once you try to move a large amount of data, will not work if the host has a different MTU to the file server. (With old OSes like this running, you can tell I work at a scientific research facility.) Here are some examples of my mount options in /etc/fstab on these boxes:

Modern Linux Machines: (CentOS 5, Scientific Linux 5):
god.cooperlees.com:/cesspool/home      /home   nfs     defaults,bg,intr,hard,noacl     0 0

Old Linux Machines: (Redhat 7 etc.)
god.cooperlees.com:/cesspool/home /home          nfs     defaults,bg,intr,hard,tcp 0 0
-No acl option available here, and tcp must be requested explicitly since UDP is the default

IRIX 6.5 (yuck - MIPS):
god.cooperlees.com:/cesspool/home /home nfs defaults,rw,sync,proto=tcp
-No acl option, and once again proto=tcp is needed since UDP is the default ...
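After mounting, you can confirm on the Linux clients what was actually negotiated (nfsstat ships with nfs-utils on the distributions above):

nfsstat -m

Check that proto=tcp shows up on the older clients where you forced it, rather than silently falling back to UDP.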


This Post Has 7 Comments

  1. interesting article, you definitely want to include this:
    edit /etc/default/nfs on your Solaris NFS server to include at least

    NFSD_SERVERS=512 (or 1024 depending on HW)
    NFSD_LISTEN_BACKLOG=256

    cheers

  2. With 3 e1000g interfaces I am still experiencing this problem on 2 out of the 3 interfaces. My add-on PCI-E Intel card displays the correct MTU range of 1500-16298.

    The 2 built-in Intel interfaces show only MTU of 1500.

    my /kernel/drv/e1000g.conf and /var/lib/dpkg/alien/sunwintgige/reloc/kernel/drv/e1000g.conf (If I do not set this one too, the above conf file gets overwritten) are set to: MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;

    And root@keeper:/kernel/drv# dladm show-link
    LINK     CLASS  MTU   STATE  BRIDGE  OVER
    e1000g2  phys   1500  up     --      --
    e1000g0  phys   9000  up     --      --
    e1000g1  phys   1500  up     --      --

    How do I get the e1000g driver to allow max MTU on all 3 e1000g interfaces?

    Thank you, Matthew

