Recently @ Facebook we found that we required IPv6 access to TACACS for auth (AAA) for the majority of our production Network Equipment. TACACS+ (tac_plus) is an old daemon released by Cisco in the late 90s. It still works (even at our scale) and the config was doing what we required, so we decided to add IPv6 support to it to tide us over until we no longer require TACACS for authentication, authorization and accounting.
IPv6 has been added in true dirty 90s C code style via pre-processor macros. The source is publicly available in a GitHub repository.
This version is based off F4.0.4.28 with the following patches (full history can be seen in the Git repository):
IPv6 (AF_INET6) Socket Listening
Readme.md has most of the information you require to build the software, and I have included RPM .spec files (tested on CentOS 6). The specs generate two RPMs, with tacacs+6 relying on the tacacs+ RPM to be installed for libraries and man pages.
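(Building them should be the usual rpmbuild -ba tacacs+.spec dance – the exact spec filename is my assumption, so check the repo.)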
Do not add listen directives to tac_plus.conf so that each daemon can load the same conf file (for consistency – a sketch follows these notes)
/var/log/tac_plus.acct and tac_plus6.acct are where accounting information will go (as well as syslog) – Logrotate time …
/var/log/tac_plus.log and tac_plus6.log are where default debug logs will go
Configure syslog to send LOG_LOCAL3 somewhere useful (this will get both tac_plus and tac_plus6 log information)
PID files will live in /var/run/tac_plus.pid.0.0.0.0 and tac_plus6.pid.::
The RPM does not /sbin/chkconfig --add or enable, so be sure to enable the version of tac_plus you require.
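As an illustration, a minimal shared tac_plus.conf might look like this (a sketch only – the key, username and DES hash below are made up):

# Shared by tac_plus (IPv4) and tac_plus6 (IPv6)
# Note: no listen directives – each init script binds its own address family
key = "mySecretKey"

user = netadmin {
    login = des mEX027bHtzTlQ
    service = exec {
        priv-lvl = 15
    }
}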
Tested Support on Vendor Hardware
Arista EOS (4.13.3F): need to use 'ipv6 host name ::1' as the TACACS conf can't handle raw IPv6 addresses (lame)
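For example (the hostname and address here are my own illustration):

ipv6 host tacsrv1 2001:db8::1
tacacs-server host tacsrv1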
Cisco NXOS (6.0(2)U2(4) [build 6.0(2)U2(3.6)]):
tacacs-server key 7 "c00p3rIstheMan"
tacacs-server host a:cafe::1
tacacs-server host b:b00c::2
aaa group server tacacs+ TACACS
  source-interface Vlan2001 (ensures which IP the requests will come from)
Juniper: >= Junos 13.3R2.7 required for IPv6 Tacacs (Tested on MX)
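A minimal Junos sketch (the address and secret are made up):

set system tacplus-server 2001:db8::1 secret "c00p3rIstheMan"
set system tacplus-server 2001:db8::1 source-address 2001:db8::f
set system authentication-order [ tacplus password ]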
I know it's old school code but please feel free to submit bug patches / enhancements. This should allow us to keep this beast running until we can deprecate the need for it …
So when I was googling around looking for working configurations of a Junos (EX in this case) AE working with a Cisco vPC (Virtual Port Channel), I could not find any examples … so I said I would post one. I will not be covering how to set up a vPC; if you're interested in that side, visit Cisco's guide here. I will also not discuss how to configure a Juniper Virtual Chassis (more info here). The devices used in this example are 2 x Cisco Nexus 7000s (running NX-OS 4) and 2 x Juniper EX4500 switches (running Junos 11.4R1) in a Mixed Mode Virtual Chassis with 2 x EX4200s.
The goal, as network engineers, is to use all bandwidth when it's available (if feasible) and to avoid relying on legacy loop-prevention protocols such as Spanning Tree to block layer 2 loops. vPC from Cisco and Virtual Chassis technologies allow LACP (Link Aggregation Control Protocol) links to span physical chassis, letting the network engineer avoid single points of failure and harness all available bandwidth. If a physical chassis were lost, you would still be operational in a degraded fashion, e.g. with half the available bandwidth, until the second chassis returned.
To configure the Cisco Nexus side, you would require the following configuration on each vPC-configured chassis (the port-channel interface itself is sketched after the member config below). I found that VLAN pruning works happily, and a native VLAN 1 is not needed if CDP is not mandatory (I did not test making CDP traverse the trunk through the Juniper – would love to hear if someone does!).
description Good practice
switchport mode trunk
switchport trunk allowed vlan 69
channel-group 69 mode active
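The port-channel interface itself would look something like the following (a sketch – it assumes the vPC domain and peer-link are already configured per Cisco's guide, and that vPC number 69 matches the channel-group):

interface port-channel69
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 69
  vpc 69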
Handy Cisco Debug Commands:
show run interface port-channel69 member
show vpc consistency-parameters int port-channel 69
show port-channel summary
The Juniper side only requires the following. This configuration is identical (you just choose different member interfaces) even if you don't have a Virtual Chassis configuration.
set interfaces xe-0/0/39 ether-options 802.3ad ae0
set interfaces xe-1/0/39 ether-options 802.3ad ae0
set interfaces ae0 description "Good Practice"
set interfaces ae0 mtu 9216
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members pr0nNet
set vlans pr0nNet vlan-id 69
set vlans pr0nNet l3-interface vlan.69 #If a L3 RVI is required
Handy Juniper Debug Commands:
show interfaces terse ae0
show lacp interfaces (you want your interfaces to be collecting and distributing)
show interfaces ae0 extensive
Please let me know if I have done anything that is not optimal – always eager to learn; I am definitely not a Cisco expert (and proud of it).
So I am a very large geek, and proud of it. It hurt to walk past a cupboard at work knowing there were 30+ Cisco PIX 501 firewalls sitting in there collecting dust. One day it dawned on me: I wondered how crap the internet would be sitting behind 30 of those slow, god-awful-to-use-and-configure firewalls. So here are the results:
Well, the time has come where I have finally got some hardware that can max out Gigabit Ethernet. I sent 3.4 TB in 9 hours! That's awesome! GG Cisco 3750 and 2 x Sun x4500 Thumpers running OpenSolaris snv_105. Good times – I bet the copper was warm 🙂
Starting ZFS send to dumper-tmp.ansto.gov.au @ Fri Mar 6 16:02:47 EST 2009
in @ 0.0 kB/s, out @ 34.9 MB/s, 3428 GB total, buffer 0% full
summary: 3428 GByte in 8 h 57 min 109 MB/s
Completed @ Sat Mar 7 01:00:34 EST 2009
If you do use x4500s or have the need to zfs send, compile mbuffer today! It rocks – I went from 30 MB/s with SSH to maxing out Gigabit Ethernet. I will post instructions on everything I did soon.
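In the meantime, the basic recipe looks something like this (a sketch – the hostnames, pool and snapshot names are made up; mbuffer's -I listens on a TCP port while -O connects out to one):

# On the receiving host: a big memory buffer in front of zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F cesspool/backup
# On the sending host: stream the snapshot through mbuffer over TCP
zfs send cesspool/home@today | mbuffer -s 128k -m 1G -O dumper-tmp:9090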
Do you want to get the most out of your x4500 network throughput with NFS? Read away if you do.
So, at work I am lucky enough to get to play with 3 Sun x4500 x86_64 Thumper systems. You may be sitting there saying big deal; I say it's a lot of disk and sweet, sexy Sun hardware.
I have posted this because of the hard time I had trying to find information on linking the network interfaces and using jumbo frames to maximise network throughput from your x4500.
I have an x4500 using jumbo frames, with two GigE (e1000g) interfaces, running Solaris 10u6 with an rpool and a big fat data pool I call cesspool. I have shares exported via NFS. Below I will detail my conf and what I have found to be the best-performing NFS mount options for clients.
I did try to do this when I had the x4500 on 10u5, but had difficulties: hosts that were not on the same switch as the device were having speed issues with NFS. I contacted Sun and got some things to try, and below is the final conf I have found to work best – please let me know if you have found better results or success with different configurations. Please note, I am now running Solaris 10u6, and apparently there was a bug with 10u5 and the e1000g driver.
1) Enabling Jumbo Frames
Host (Solaris) Config:
On Solaris two things must be done to enable jumbo frames. Please ensure the switch is configured before enabling the host:
Enable it on the driver – e.g. the e1000g conf = /kernel/drv/e1000g.conf – and then set the MTU on the interface when you plumb it (see the aggregation section below).
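For example, in /kernel/drv/e1000g.conf (a sketch – one value per e1000g instance; 3 allows frames up to roughly 16k, which comfortably covers a 9000 byte MTU):

# 0 = standard 1500 byte frames (default), 2 = up to ~8k, 3 = up to ~16k
MaxFrameSize=3,3;

On the switch side (assuming a 3750-class Catalyst, as in the earlier post), jumbo frames are a global setting – system mtu jumbo 9000 – and require a reload to take effect.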
2) Link Aggregation

Host (Solaris) Config:

# Set the device IP (the MTU can also be set here if jumbo frames are enabled)
ifconfig aggr1 plumb x.x.x.x up
# Show me devs / links so I can watch
dladm show-dev -s -i 2
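The step that actually creates aggr1 isn't shown above; on Solaris 10 it would look something like this, run before the plumb (a sketch – the L4 policy is my choice, and the host end is set active because the switch config below uses mode passive, and LACP needs at least one active end):

# Create aggregation key 1 (which becomes aggr1) over both e1000g ports
dladm create-aggr -P L4 -l active -d e1000g0 -d e1000g1 1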
Switch (Cisco) Config – configure a Port Group (# = insert an integer):
switchport access vlan #
switchport mode access
port-channel load-balance src-dst-ip
Then configure the ports you want in the channel (4 max) as follows:
# = Insert Integer
channel-group # mode passive
switchport access vlan #
switchport mode access
show run (to verify)
Remember to copy run start once happy with config 🙂
3) NFS Sharing w/ZFS
This was another silly little mistake I was making: I was setting sharenfs=on on the ZFS file systems I wished to share, and then trying to apply the share properties using the share command and adding entries to the sharetab manually. With ZFS though, all your NFS options should be applied via the sharenfs attribute on the ZFS filesystem, as in the following example:
zfs set sharenfs=ro,rw=god.cooperlees.com,root=god.cooperlees.com cesspool/home
These arguments get passed to 'share' via ZFS @ boot time.
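To check what ZFS will hand to share (the dataset name matches the mount examples below):

zfs get sharenfs cesspool/home
share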
4) NFS Mount Options
Most of my clients (that I have tuned) are Linux boxes running Scientific Linux 5.2 (a Red Hat derivative, similar to CentOS). I have found that once jumbo frames and aggregated interfaces are involved, TCP performs better. By default, TCP is used on modern Linux NFS clients, but on older Linux, Irix etc. UDP is the default, which, once you try to move a large amount of data, will not work if the host has a different MTU to the file server. (With old OSs like this running, you can tell I work @ a scientific research facility.) Here are some examples of my mount options in /etc/fstab on these boxes:
Modern Linux Machines (CentOS 5, Scientific Linux 5):
god.cooperlees.com:/cesspool/home /home nfs defaults,bg,intr,hard,noacl 0 0
Old Linux Machines (Redhat 7 etc.):
god.cooperlees.com:/cesspool/home /home nfs defaults,bg,intr,hard,tcp 0 0
- No mention of ACLs here, and tcp is specified since UDP is the default
Irix 6.5 (yuck – MIPS):
god.cooperlees.com:/cesspool/home /home nfs defaults,rw,sync,proto=tcp
- No ACL option, and once again UDP is the default, hence proto=tcp …
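If you want to confirm what options a Linux client actually negotiated (handy with all these different defaults), nfsstat shows the live mount options:

nfsstat -m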