IPv6 Tacacs+ Support (tac_plus)

Recently @ Facebook we found we required IPv6 access to TACACS for AAA on the majority of our production network equipment. Tacacs+ (tac_plus) is an old daemon released by Cisco in the late 90s. It still works (even at our scale) and the config does what we require, so we decided to add IPv6 support to it and keep moving forward until we no longer require TACACS for authentication, authorization and accounting.

IPv6 has been added in true dirty 90s C code style via pre-processor macros. The source is publicly available via a GitHub Repository.

This version is based off F4.0.4.19 with the following patches (full history can be seen in the Git Repository):

  • Logging modifications
  • PAM Support
  • MD5 support
  • IPv6 (AF_INET6) Socket Listening

Readme.md has most of the information you need to build the software, and I have included RPM .spec files (tested on CentOS 6). The specs generate two RPMs, with tacacs+6 relying on the tacacs+ RPM being installed for libraries and man pages.

RPMs built on CentOS 6.5 x86_64 + source RPMs available here: https://cooperlees.com/rpms/

Usage Tips:

  • Do not add listen directives to tac_plus.conf so that each daemon can load the same conf file (for consistency); a sketch of such a conf follows this list
  • Logging:
    • /var/log/tac_plus.acct and tac_plus6.acct are where accounting information will go (as well as syslog). Logrotate time …
    • /var/log/tac_plus.log and tac_plus6.log is where default debug logs will go
  • Configure syslog to send the LOG_LOCAL3 somewhere useful (this will get both tac_plus and tac_plus6 log information)
  • Pid Files will live in /var/run/tac_plus.pid.0.0.0.0 and tac_plus6.pid.::
  • The RPM does not /sbin/chkconfig --add or enable, so be sure to enable the version of tac_plus you require.
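
For reference, a minimal sketch of the kind of shared conf both daemons can load (note the absence of listen directives); the key, user and group names are placeholders, and the login = PAM line assumes the PAM support patch listed above:

[plain]
# Shared by tac_plus (IPv4) and tac_plus6 (IPv6) - no listen directives here
key = "placeholderKey"

user = cooper {
    member = admins
    # PAM authentication (assumes the PAM patch)
    login = PAM
}

group = admins {
    default service = permit
}
[/plain]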

Tested Support on Vendor Hardware

  • Arista EOS (4.13.3F): need to use ‘ipv6 host name ::1’ as the TACACS conf can’t handle raw IPv6 addresses (lame); a sketch follows this list
  • Cisco NXOS (6.0(2)U2(4) [build 6.0(2)U2(3.6)]):
    feature tacacs+
    tacacs-server key 7 "c00p3rIstheMan"
    tacacs-server host a:cafe::1
    tacacs-server host b:b00c::2
    aaa group server tacacs+ TACACS
    server a:cafe::1
    server b:b00c::2
    source-interface Vlan2001 (ensures which IP the requests will come from)
  • Juniper: >= Junos 13.3R2.7 required for IPv6 Tacacs (Tested on MX)
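
For the Arista case, a rough sketch of the host-alias workaround (host name, address and the AAA statement are illustrative placeholders, reusing the key and address from the NXOS example above; exact AAA config varies by EOS version):

[plain]
ipv6 host tac1 a:cafe::1
tacacs-server key c00p3rIstheMan
tacacs-server host tac1
aaa authentication login default group tacacs+ local
[/plain]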

I know it’s old-school code, but please feel free to submit bug patches / enhancements. This should allow us to keep this beast running until we can deprecate its need …

Junos Aggregated Ethernet w/LACP and Cisco Nexus Virtual Port Channel

So when I was googling around looking for working configurations of Junos (EX in this case) AE working with a Cisco vPC (Virtual Port Channel), I could not find any examples … so I said I would post one. I will not be covering how to set up a vPC; if you’re interested in that side, visit Cisco’s guide here. I will also not discuss how to configure a Juniper Virtual Chassis (more info here). The devices used in this example are 2 x Cisco Nexus 7k (running NX-OS 4) and 2 x Juniper EX4500 switches (running Junos 11.4R1) in a mixed-mode Virtual Chassis with 2 x EX4200s.

The goal, as network engineers, is to use all available bandwidth where feasible and to avoid legacy layer 2 loop-prevention protocols such as Spanning Tree. Cisco’s vPC and Juniper’s VC technologies allow LACP (Link Aggregation Control Protocol) links to span physical chassis, letting the network engineer avoid single points of failure and harness all available bandwidth. If a physical chassis were lost, you would still be operating in a degraded fashion, e.g. with half the available bandwidth, until the second chassis returned.

To configure the Cisco Nexus side you require the following configuration on each vPC-configured chassis. I found that VLAN pruning can be happily done and a native VLAN 1 is not needed if CDP is not mandatory (I did not test making CDP traverse the trunk through the Juniper; I would love to hear if someone does!).

[plain]
conf t

interface port-channel69
description Good practice
switchport mode trunk
vpc 69
mtu 9216
switchport trunk allowed vlan 69

interface Ethernetx/x
channel-group 69 mode active
[/plain]

Handy Cisco Debug Commands:

  • show vpc
  • show run interface port-channel69 member
  • show vpc consistency-parameters int port-channel 69
  • show port-channel summary

The Juniper side only requires the following; this configuration is identical (you just choose different member interfaces) even if you don’t have a Virtual Chassis configuration.

[plain]
set interfaces xe-0/0/39 ether-options 802.3ad ae0
set interfaces xe-1/0/39 ether-options 802.3ad ae0
set interfaces ae0 description "Good Practice"
set interfaces ae0 mtu 9216
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members pr0nNet

set vlans pr0nNet vlan-id 69
set vlans pr0nNet l3-interface vlan.69 #If a L3 RVI is required
[/plain]

Handy Juniper Debug Commands:

  • show interface terse ae0
  • show lacp interfaces (you want your interfaces to be collecting and distributing)
  • show interface ae0 extensive

Please let me know if I have done anything that is not optimal – always eager to learn; I am definitely not (and proud of it) a Cisco expert.

30 Levels of NAT Firewall Lab

So I am a very large geek, and proud of it. It hurt to walk past a cupboard at work knowing there were 30+ Cisco PIX 501 firewalls sitting in there collecting dust. One day it dawned on me: I wondered how crap the internet would be sitting behind 30 of those slow, god-awful-to-use-and-configure firewalls. So here are the results:

Network Diagram


Sample PIX 501 conf:

[plain]
hostname fwX
password cisco
enable password cisco
domain-name cooperlees.com

ip address inside 10.N.0.1 255.255.255.0
ip address outside 10.N1.0.2 255.255.255.0
interface ethernet0 auto
interface ethernet1 100full

route outside 0 0 10.N1.0.1
nat (inside) 1 10.N.0.0 255.255.255.0
global (outside) 1 interface

access-list outbound permit ip any any
access-group outbound in interface inside

access-list ping_acl permit icmp any any
access-group ping_acl in interface outside
[/plain]
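
To make the placeholders concrete, here is a worked instance for firewall 5 in the chain, assuming N1 means N+1 (each firewall's outside interface sits on the next firewall's inside subnet and default-routes to it):

[plain]
hostname fw5
ip address inside 10.5.0.1 255.255.255.0
ip address outside 10.6.0.2 255.255.255.0
route outside 0 0 10.6.0.1
nat (inside) 1 10.5.0.0 255.255.255.0
global (outside) 1 interface
[/plain]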

Video of the Results

Thanks to Jason Leschnik, Anthony Noonan, Kyle Seton and Chris Steven for their assistance.

Cisco Acknowledges AJ2010 Network

Awesome stuff. A lot of kit was rolled out @ the Cataract Scout Park by Alan, Brett, Colin, Todd and myself – plus many, many others!

Good to see the kit was well used for quality videos like:

More info:
http://blogs.cisco.com/news/comments/cisco_helps_the_22nd_australian_scout_jamboree_reach_new_digital_heights/

Maxing out Gig Ethernet with Sun x4500 Thumpers

Well the time has come where I have finally got some hardware that can max out gig Ethernet. I sent 3.4 TB in 9 hours! That’s awesome! GG Cisco 3750 and 2 x Sun x4500 Thumpers running OpenSolaris snv_105. Good times – I bet the copper was warm 🙂

root@dumper1:sbin> ./zfsMbufferInitialSend.sh
Starting ZFS send to dumper-tmp.ansto.gov.au @ Fri Mar  6 16:02:47 EST 2009
in @  0.0 kB/s, out @ 34.9 MB/s, 3428 GB total, buffer   0% full

summary: 3428 GByte in 8 h 57 min  109 MB/s

real    537m40.533s
user    34m32.281s
sys     326m18.227s
Completed @ Sat Mar  7 01:00:34 EST 2009

If you do use x4500s or have the need to zfs send, compile mbuffer today! It rocks – I went from 30 MB a second with SSH to maxing out Gigabit Ethernet. I will post instructions on everything I did soon.
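
For anyone wanting to try it before I write that up, the basic shape is a zfs send piped into mbuffer, with mbuffer listening on the receiving Thumper; a minimal sketch (pool, snapshot, port and buffer sizes are placeholders, not my exact script):

# Receiving side: listen on TCP 9090 and feed the stream into zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F pool/fs

# Sending side: stream the snapshot through mbuffer to the receiver
zfs send pool/fs@snap | mbuffer -s 128k -m 1G -O dumper-tmp.ansto.gov.au:9090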

Solaris 10 + Jumbo Frames + Link Aggregation with Cisco 3750 Switch + NFS Exporting / Mounting

Do you want to get the most out of your x4500 network throughput with NFS? Read away if you do.

So, at work I am lucky enough to get to play with 3 Sun x4500 x86_64 Thumper systems. You may be sitting there saying big deal; I say it’s a lot of disk and sweet sexy Sun hardware.

The Sun x4500 Thumper

I have posted this due to the hard time I had trying to find information on linking the network interfaces and using jumbo frames to maximise network throughput from your x4500.

I have an x4500 running Solaris 10u6, using jumbo frames, with two Gig (e1000g) interfaces, an rpool and a big fat data pool I call cesspool. Shares are exported by NFS. Below I will detail my conf and what I have found to be the best-performing NFS mount options from clients.

I did try to do this when I had the x4500 on 10u5, but had difficulties: hosts that were not on the same switch as the device were having speed issues with NFS. I contacted Sun and got some things to try, and combined with things I tried myself, below is the end conf I have found to work best. Please let me know if you have found better results or success with different configurations. Please note, I am now running Solaris 10u6, and apparently there was a bug with 10u5 and the e1000g driver.

1) Enabling Jumbo Frames

Host (Solaris) Config:

On Solaris two things must be done to enable jumbo frames. Please ensure the switch is configured before enabling the host:

HOSTNAME=god
INTERFACE=e1000g0
SIZE=9000

  1. Enable it on the driver – e.g. for e1000g, edit /kernel/drv/e1000g.conf
    • A reboot will be required if not already enabled
  2. Enable jumbo frames with ifconfig
    • From the CLI: ifconfig ${INTERFACE} mtu ${SIZE}
    • At boot: make /etc/hostname.${INTERFACE} contain "${HOSTNAME} mtu ${SIZE}"
    • A sketch putting both steps together follows this list

    – This has been tested on both Solaris 10u6 and OpenSolaris 2008.11
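
Putting those two steps together for the values above, a rough sketch (the MaxFrameSize property and its values are from memory of the e1000g driver; verify against your driver's documentation before trusting it):

# /kernel/drv/e1000g.conf -- allow jumbo frames on every e1000g instance (reboot required)
# Value meanings are an assumption from memory: 0 = 1500 bytes ... 3 = up to ~16k
MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;

# One-off from the CLI
ifconfig e1000g0 mtu 9000

# Persistent at boot -- /etc/hostname.e1000g0 contains:
god mtu 9000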

Switch Config:

system mtu jumbo 9000 (this gets hidden in the IOS defaults)
system mtu routing 1500 (this is auto-inserted by IOS)

Show system mtu Output:
System MTU size is 1500 bytes
System Jumbo MTU size is 9000 bytes
Routing MTU size is 1500 bytes

Remember to copy run start once happy with config 🙂

2) Enabling Aggregated Interfaces

Host (Solaris) Config:

I wrote a script to apply the configuration. The script assumes you already have /etc/defaultrouter, /etc/netmasks, /etc/resolv.conf and /etc/nsswitch.conf all set correctly.

Here is the script I used to apply the conf:

#!/usr/bin/bash

# Create Link aggr on plumper
# Ether Channel on Switch Ports 2 on each 3750 – 20081223

# Do I want these ?
# -l = LACP mode – active, passive or disabled
# -T time – LACP Timer …

ifconfig e1000g0 unplumb
ifconfig e1000g1 unplumb

# Sun’s Suggestion
dladm create-aggr -P L4 -l active -d e1000g0 -d e1000g1 1

# Move hostname file
mv /etc/hostname.e1000g0 /etc/hostname.aggr1

# Check Link
dladm show-aggr 1

# Set device IP # Can set MTU here if jumbo enabled
ifconfig aggr1 plumb x.x.x.x up

# Show me devs / links so I can watch
dladm show-dev -s -i 2
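
For completeness, after the mv above, /etc/hostname.aggr1 is what brings the aggregate up at boot; it simply carries over whatever was in hostname.e1000g0 (hostname and MTU from section 1):

# Contents of /etc/hostname.aggr1 (was hostname.e1000g0)
god mtu 9000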

Switch Config:

# = Insert Integer

Configure a Port Group:

  • interface Port-channel#
    • switchport access vlan #
    • switchport mode access
  • exit
  • port-channel load-balance src-dst-ip

Please configure the ports you want in the channel (4 max) as follows:

# = Insert Integer

  • config term
    • interface INTERFACE
      • channel-group # mode passive
      • channel-protocol lacp
      • switchport access vlan #
      • switchport mode access
      • exit
    • end
  • show run (to verify)

Remember to copy run start once happy with config 🙂

3) NFS Sharing w/ZFS

This was another silly little mistake I was making: I was turning sharenfs=on on the ZFS file systems I wished to share and then trying to apply the share properties using the share command and adding entries to the sharetab manually. With ZFS though, all your NFS options should be applied to the sharenfs attribute on the ZFS filesystem, as in the following example:

  • zfs set sharenfs=ro,rw=god.cooperlees.com,root=god.cooperlees.com cesspool/home

These arguments get passed to ‘share’ via ZFS @ boot time.
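
To sanity-check the result, you can ask ZFS for the property and list the active shares (using the cesspool/home filesystem that the clients below mount):

zfs get sharenfs cesspool/home
share    # should list /cesspool/home with the options above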

4) NFS Mount Options

Most of my clients (that I have tuned) are Linux boxes running Scientific Linux 5.2 (a Red Hat derivative, similar to CentOS). I have found that once jumbo frames and aggregated interfaces are involved, TCP performs better. By default, TCP is used on modern Linux NFS clients, but on older Linux, Irix etc. UDP is the default, which, once you try to move a large amount of data, will not work if the host has a different MTU to the file server. (With old OSs like these running you can tell I work @ a scientific research facility.) Here are some examples of my mount options in /etc/fstab on these boxes:

Modern Linux Machines: (CentOS 5, Scientific Linux 5):
god.cooperlees.com:/cesspool/home      /home   nfs     defaults,bg,intr,hard,noacl     0 0

Old Linux Machines: (Redhat 7 etc.)
god.cooperlees.com:/cesspool/home /home          nfs     defaults,bg,intr,hard,tcp 0 0
- No mention of ACLs here, and as UDP is the default on these older clients, tcp is specified explicitly

Irix 6.5 (yuck – MIPS):
god.cooperlees.com:/cesspool/home /home nfs defaults,rw,sync,proto=tcp
- No acl option, and once again UDP is the default, hence proto=tcp …
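
If you want to confirm what a Linux client has actually negotiated (protocol, rsize/wsize, etc.) once mounted, nfsstat will show the per-mount flags:

nfsstat -m    # on the Linux client; look for proto=tcp and the rsize/wsize in use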