
IPv6 Tacacs+ Support (tac_plus)

Posted by cooper on Jun 17, 2014 in cisco, g33k, juniper, linux, tech

Recently @ Facebook we found that we required IPv6 access to TACACS for auth (AAA) on the majority of our production network equipment. Tacacs+ (tac_plus) is an old daemon released by Cisco in the late 90s. It still works (even at our scale) and the config was doing what we required, so we decided to add IPv6 support to it and move forwards until we no longer require TACACS for authentication, authorization and accounting.

IPv6 is good for you ...

Get IPv6’in

IPv6 has been added in true dirty 90s C code style via pre-processor macros. The source is publicly available via a GitHub Repository.

This version is based off F4.0.4.19 with the following patches (full history can be seen in the Git Repository):

  • Logging modifications
  • PAM Support
  • MD5 support
  • IPv6 (AF_INET6) Socket Listening

Readme.md has most of the information you require to build the software, and I have included RPM .spec files (tested on CentOS 6). The specs generate two RPMs, with tacacs+6 relying on the tacacs+ RPM being installed for libraries and man pages.

RPMs built on CentOS 6.5 x86_64 + SRC RPMs available here: http://cooperlees.com/rpms/

Usage Tips:

  • Do not add listen directives to tac_plus.conf so that each daemon can load the same conf file (for consistency)
  • Logging:
    • /var/log/tac_plus.acct and tac_plus6.acct are where accounting information will go (as well as syslog). Logrotate time …
    • /var/log/tac_plus.log and tac_plus6.log are where default debug logs will go
  • Configure syslog to send LOG_LOCAL3 somewhere useful (this will capture both tac_plus and tac_plus6 log information)
  • PID files will live in /var/run/tac_plus.pid.0.0.0.0 and tac_plus6.pid.::
  • The RPM does not /sbin/chkconfig --add or enable anything, so be sure to enable the version of tac_plus you require.
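For the syslog and rotation tips above, something along these lines covers both daemons (a sketch, not shipped config; file paths and retention are assumptions to adjust to taste):

# /etc/rsyslog.d/tacacs.conf : catch LOG_LOCAL3 from tac_plus and tac_plus6
local3.*    /var/log/tacacs_syslog.log

# /etc/logrotate.d/tac_plus : rotate accounting and debug logs
/var/log/tac_plus*.acct /var/log/tac_plus*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}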

Tested Support on Vendor Hardware

  • Arista EOS (4.13.3F): need to use 'ipv6 host name ::1' as the TACACS config can't handle raw IPv6 addresses (lame)
  • Cisco NXOS (6.0(2)U2(4) [build 6.0(2)U2(3.6)]):
    feature tacacs+
    tacacs-server key 7 "c00p3rIstheMan"
    tacacs-server host a:cafe::1
    tacacs-server host b:b00c::2
    aaa group server tacacs+ TACACS
    server a:cafe::1
    server b:b00c::2
    source-interface Vlan2001 (ensures which IP the requests will come from)
  • Juniper: >= Junos 13.3R2.7 required for IPv6 TACACS (tested on MX)
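For the Arista workaround mentioned above, the host-alias approach looks roughly like this (the alias `tac1` and the address are placeholders, not tested verbatim):

ipv6 host tac1 a:cafe::1
tacacs-server host tac1

EOS then resolves the alias to the IPv6 address at lookup time, sidestepping the raw-address limitation in the TACACS config.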

I know it’s old school code, but please feel free to submit bug patches / enhancements. This should allow us to keep this beast running until we can deprecate its need …


RANCID with Junos Read-Only User

Posted by cooper on Nov 9, 2012 in g33k, juniper

Here are the settings for a Junos device to create a user with read-only privileges to allow RANCID to work.

set system login class RANCID permissions access
set system login class RANCID permissions admin
set system login class RANCID permissions firewall
set system login class RANCID permissions flow-tap
set system login class RANCID permissions interface
set system login class RANCID permissions network
set system login class RANCID permissions routing
set system login class RANCID permissions secret
set system login class RANCID permissions security
set system login class RANCID permissions snmp
set system login class RANCID permissions storage
set system login class RANCID permissions system
set system login class RANCID permissions trace
set system login class RANCID permissions view
set system login class RANCID permissions view-configuration

set system login user rancid full-name RANCID
set system login user rancid class RANCID
set system login user rancid authentication encrypted-password "xxx"
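To have RANCID actually use this account, the matching ~/.cloginrc entries look something like the following (the hostname and password token are placeholders):

add user     router.example.net  rancid
add password router.example.net  {login-password}
add method   router.example.net  ssh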


Updating Juniper QFabric

Posted by cooper on Sep 27, 2012 in g33k, juniper

The following post shows the upgrade process recently performed on a client's QFabric system, along with the output obtained. The output was captured updating from the 12.2X30 to the 12.2X50 Junos release via the 'Non Stop Services Upgrade' (NSSU) method, a very conservative approach that updates redundant components one at a time.

The overall process is:

  1. Upgrade Director Group
  2. Upgrade QFabric Interconnects
  3. Upgrade each node group
    1. Network Node group (NW-NG-01)
    2. Each redundant server node group (RSNG)
    3. Each server node group (my client did not have any SNGs)

Before Upgrade Backup

All that needs to be backed up is the QFabric configuration file; everything else about the install is QFabric-standard and can be restored using documented Juniper methods.

To back up the config, log into the device and:

  1. Capture the output from 'show configuration | no-more'

or

  1. 'show configuration | save QFabric.conf'
    1. Remotely: scp username@x.x.x.x:/pbdata/packages/QFabric.conf
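Fleshing out the remote copy a little (x.x.x.x is the DG VIP and the destination filename is just a suggestion):

# After saving on-box, pull the config off the active Director Group member
scp username@x.x.x.x:/pbdata/packages/QFabric.conf ./QFabric-backup.conf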

Upgrade Process with Output

Director Group Upgrade

Copy the RPM image to /pbdata/packages on the director. This process takes around 2 hours; we started at 7:15am and finished at 9:15am.

  1. scp FILE.rpm root@x.x.x.x:/pbdata/packages
  2. Log into the DG via the VIP and start the upgrade
  • request system software nonstop-upgrade director-group FILE.rpm
    • Junos looks in /pbdata/packages by default

Upgrade Output:

root@FSASYDBRDQFAB01> request system software nonstop-upgrade director-group jinstall-qfabric-12.2X50-D20.4.rpm
Validating update package jinstall-qfabric-12.2X50-D20.4.rpm
Installing update package jinstall-qfabric-12.2X50-D20.4.rpm
Installing fabric images version 12.2X50-D20.4
Performing cleanup
Package install complete
Installing update package jinstall-qfabric-12.2X50-D20.4.rpm on peer

Triggering Initial Stage of Fabric Manager Upgrade

Updating CCIF default image to 12.2X50-D20.4

Updating FM-0 to Junos version 12.2X50-D20.4

[Status   2012-09-24 14:43:37]: Fabric Manager: Upgrade Initial Stage started

[FM-0     2012-09-24 14:43:52]: Transferring FM-0 Mastership to LOCAL DG

[FM-0     2012-09-24 14:45:44]: Finished FM-0 Mastership switch

[NW-NG-0  2012-09-24 14:45:59]: Transferring NW-NG-0 Mastership to LOCAL DG

[NW-NG-0  2012-09-24 14:47:22]: Finished NW-NG-0 Mastership switch

[FM-0     2012-09-24 14:48:10]: Retrieving package

[FM-0     2012-09-24 14:49:13]: Retrieving package

[FM-0     2012-09-24 14:50:15]: Pushing bundle to re0

[Status   2012-09-24 14:52:03]: Load completed with 0 errors

[Status   2012-09-24 14:52:03]: Reboot is required to complete upgrade

[Status   2012-09-24 14:52:04]: Trying to Connect to Node: FM-0

[Status   2012-09-24 14:52:19]: Rebooting FM-0

[FM-0     2012-09-24 14:52:19]: Waiting for FM-0 to terminate

Starting Peer upgrade

Initiating rolling upgrade of Director peer:  version 12.2X50-D20.4

Inform CCIF regarding rolling upgrade

[Peer Update Status]: Validating install package jinstall-qfabric-12.2X50-D20.4.rpm

[Peer Update Status]: jinstall-qfabric-12.2X50.D20.4-4

[Peer Update Status]: Cleaning up node for rolling phase one upgrade

[Peer Update Status]: Director group upgrade complete

[Peer Update Status]: COMPLETED

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to return after reboot and start phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to complete phase one of rolling upgrade

[Peer Update Status]: Waiting for peer to complete phase one of rolling upgrade

[Peer Update Status]: Peer completed phase one of rolling upgrade

Setting peer DG node as the master SFC

Delaying start of local upgrade to allow peer services time to initialize [15 minutes]

Delaying start of local upgrade to allow peer services time to initialize [15 minutes]

Delaying start of local upgrade to allow peer services time to initialize [12 minutes]

Delaying start of local upgrade to allow peer services time to initialize [9 minutes]

Delaying start of local upgrade to allow peer services time to initialize [6 minutes]

Delaying start of local upgrade to allow peer services time to initialize [3 minutes]

[Peer Update Status]: Check for VMs on dg0

Triggering Final Stage of Fabric Manager Upgrade:

Updating FM-0 to Junos version 12.2X50-D20.4

[Status   2012-09-24 15:33:31]: Fabric Manager: Upgrade Final Stage started

[NW-NG-0  2012-09-24 15:33:45]: Transferring NW-NG-0 Mastership to REMOTE DG

[NW-NG-0  2012-09-24 15:35:08]: Finished NW-NG-0 Mastership switch

[Status   2012-09-24 15:35:08]: Upgrading FM-0 VM on worker DG to 12.2X50-D20.4

[DRE-0    2012-09-24 15:36:09]: Retrieving package

[DRE-0    2012-09-24 15:37:02]: ------- re0: -------

[Status   2012-09-24 15:38:28]: Load completed with 0 errors

[Status   2012-09-24 15:38:28]: Reboot is required to complete upgrade

[DRE-0    2012-09-24 15:38:34]: Waiting for DRE-0 to terminate

[DRE-0    2012-09-24 15:38:46]: Waiting for DRE-0 to come back

[DRE-0    2012-09-24 15:42:00]: Running Uptime Test for DRE-0

[DRE-0    2012-09-24 15:42:06]: Uptime Test for DRE-0 Passed

[Status   2012-09-24 15:42:06]: DRE-0 Booted successfully

Performing post install shutdown and cleanup

Broadcast message from root (Mon Sep 24 15:42:07 2012):

The system is going down for reboot NOW!

Director group upgrade complete

Interconnect Upgrade

This process takes around an hour. It will upgrade Junos on each System Control Board (SCB) partition, grabbing the code automatically via the FTP server running on the active Director Group member. We observed roughly that time.

  1. From the DG CLI initiate the upgrade:
  • request system software nonstop-upgrade fabric FILE.rpm

Output:

[FC-0     2012-09-24 16:22:17]: Retrieving package
[FC-1     2012-09-24 16:22:18]: Retrieving package
[IC-F7811 2012-09-24 16:22:39]: Retrieving package
[IC-F7712 2012-09-24 16:22:41]: Retrieving package
[FC-0     2012-09-24 16:23:14]: Validating on re0
[FC-1     2012-09-24 16:23:18]: Validating on re0
[IC-F7712 2012-09-24 16:23:57]: Pushing bundle to re1

[IC-F7811 2012-09-24 16:23:58]: Pushing bundle to re1

[IC-F7712 2012-09-24 16:24:47]: Validating on re1

[IC-F7811 2012-09-24 16:24:48]: Validating on re1

[FC-0     2012-09-24 16:25:02]: Done with validate on all chassis

[FC-0     2012-09-24 16:25:02]: ------- re0: -------

[FC-1     2012-09-24 16:25:11]: Done with validate on all chassis

[FC-1     2012-09-24 16:25:11]: ------- re0: -------

[IC-F7712 2012-09-24 16:29:51]: Validating on re0

[IC-F7811 2012-09-24 16:30:48]: Validating on re0

[IC-F7712 2012-09-24 16:34:10]: Done with validate on all chassis

[IC-F7712 2012-09-24 16:34:10]: ------- re1: -------

[IC-F7811 2012-09-24 16:34:20]: Done with validate on all chassis

[IC-F7811 2012-09-24 16:34:20]: ------- re1: -------

[IC-F7712 2012-09-24 16:34:55]: Step 1 of 20 Creating temporary file system

[IC-F7712 2012-09-24 16:34:55]: Step 2 of 20 Determining installation source

[IC-F7712 2012-09-24 16:34:55]: Step 3 of 20 Processing format options

[IC-F7712 2012-09-24 16:34:55]: Step 4 of 20 Determining installation slice

[IC-F7712 2012-09-24 16:34:56]: Step 5 of 20 Creating and labeling new slices

[IC-F7811 2012-09-24 16:34:56]: Step 1 of 20 Creating temporary file system

[IC-F7712 2012-09-24 16:34:56]: Step 6 of 20 Create and mount new file system

[IC-F7811 2012-09-24 16:34:57]: Step 2 of 20 Determining installation source

[IC-F7811 2012-09-24 16:34:57]: Step 3 of 20 Processing format options

[IC-F7811 2012-09-24 16:34:57]: Step 4 of 20 Determining installation slice

[IC-F7811 2012-09-24 16:34:58]: Step 5 of 20 Creating and labeling new slices

[IC-F7811 2012-09-24 16:34:58]: Step 6 of 20 Create and mount new file system

[IC-F7712 2012-09-24 16:35:04]: Step 7 of 20 Getting OS bundles

[IC-F7712 2012-09-24 16:35:04]: Step 8 of 20 Updating recovery media

[IC-F7811 2012-09-24 16:35:07]: Step 7 of 20 Getting OS bundles

[IC-F7811 2012-09-24 16:35:07]: Step 8 of 20 Updating recovery media

[IC-F7712 2012-09-24 16:35:27]: Step 9 of 20 Extracting incoming image

[IC-F7811 2012-09-24 16:35:30]: Step 9 of 20 Extracting incoming image

[IC-F7712 2012-09-24 16:36:38]: Step 10 of 20 Unpacking OS packages

[IC-F7712 2012-09-24 16:36:41]: Step 11 of 20 Mounting jbase package

[IC-F7811 2012-09-24 16:36:42]: Step 10 of 20 Unpacking OS packages

[IC-F7811 2012-09-24 16:36:45]: Step 11 of 20 Mounting jbase package

[IC-F7712 2012-09-24 16:37:05]: Step 12 of 20 Creating base OS symbolic links

[IC-F7811 2012-09-24 16:37:09]: Step 12 of 20 Creating base OS symbolic links

[IC-F7712 2012-09-24 16:38:03]: Step 13 of 20 Creating fstab

[IC-F7712 2012-09-24 16:38:03]: Step 14 of 20 Creating new system files

[IC-F7712 2012-09-24 16:38:04]: Step 15 of 20 Adding jbundle package

[IC-F7811 2012-09-24 16:38:07]: Step 13 of 20 Creating fstab

[IC-F7811 2012-09-24 16:38:07]: Step 14 of 20 Creating new system files

[IC-F7811 2012-09-24 16:38:07]: Step 15 of 20 Adding jbundle package

[IC-F7712 2012-09-24 16:40:35]: Step 16 of 20 Backing up system data

[IC-F7811 2012-09-24 16:40:36]: Step 16 of 20 Backing up system data

[IC-F7712 2012-09-24 16:40:37]: Step 17 of 20 Setting up shared partition data

[IC-F7811 2012-09-24 16:40:37]: Step 17 of 20 Setting up shared partition data

[IC-F7712 2012-09-24 16:40:37]: Step 18 of 20 Checking package sanity in installation

[IC-F7712 2012-09-24 16:40:37]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[IC-F7811 2012-09-24 16:40:37]: Step 18 of 20 Checking package sanity in installation

[IC-F7811 2012-09-24 16:40:37]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[IC-F7712 2012-09-24 16:40:40]: Step 20 of 20 Setting da0s1 as new active partition

[IC-F7811 2012-09-24 16:40:41]: Step 20 of 20 Setting da0s1 as new active partition

[IC-F7712 2012-09-24 16:40:50]: ------- re0: -------

[IC-F7811 2012-09-24 16:40:52]: ------- re0: -------

[IC-F7712 2012-09-24 16:41:36]: Step 1 of 20 Creating temporary file system

[IC-F7712 2012-09-24 16:41:36]: Step 2 of 20 Determining installation source

[IC-F7712 2012-09-24 16:41:37]: Step 3 of 20 Processing format options

[IC-F7712 2012-09-24 16:41:37]: Step 4 of 20 Determining installation slice

[IC-F7712 2012-09-24 16:41:38]: Step 5 of 20 Creating and labeling new slices

[IC-F7712 2012-09-24 16:41:38]: Step 6 of 20 Create and mount new file system

[IC-F7811 2012-09-24 16:41:39]: Step 1 of 20 Creating temporary file system

[IC-F7811 2012-09-24 16:41:39]: Step 2 of 20 Determining installation source

[IC-F7811 2012-09-24 16:41:40]: Step 3 of 20 Processing format options

[IC-F7811 2012-09-24 16:41:40]: Step 4 of 20 Determining installation slice

[IC-F7811 2012-09-24 16:41:41]: Step 5 of 20 Creating and labeling new slices

[IC-F7811 2012-09-24 16:41:42]: Step 6 of 20 Create and mount new file system

[IC-F7712 2012-09-24 16:41:49]: Step 7 of 20 Getting OS bundles

[IC-F7712 2012-09-24 16:41:50]: Step 8 of 20 Updating recovery media

[IC-F7811 2012-09-24 16:41:51]: Step 7 of 20 Getting OS bundles

[IC-F7811 2012-09-24 16:41:51]: Step 8 of 20 Updating recovery media

[IC-F7712 2012-09-24 16:42:15]: Step 9 of 20 Extracting incoming image

[IC-F7811 2012-09-24 16:42:19]: Step 9 of 20 Extracting incoming image

[IC-F7712 2012-09-24 16:44:01]: Step 10 of 20 Unpacking OS packages

[IC-F7712 2012-09-24 16:44:04]: Step 11 of 20 Mounting jbase package

[IC-F7811 2012-09-24 16:44:05]: Step 10 of 20 Unpacking OS packages

[IC-F7811 2012-09-24 16:44:07]: Step 11 of 20 Mounting jbase package

[IC-F7712 2012-09-24 16:44:36]: Step 12 of 20 Creating base OS symbolic links

[IC-F7811 2012-09-24 16:44:40]: Step 12 of 20 Creating base OS symbolic links

[IC-F7712 2012-09-24 16:46:01]: Step 13 of 20 Creating fstab

[IC-F7712 2012-09-24 16:46:01]: Step 14 of 20 Creating new system files

[IC-F7712 2012-09-24 16:46:01]: Step 15 of 20 Adding jbundle package

[IC-F7811 2012-09-24 16:46:06]: Step 13 of 20 Creating fstab

[IC-F7811 2012-09-24 16:46:06]: Step 14 of 20 Creating new system files

[IC-F7811 2012-09-24 16:46:06]: Step 15 of 20 Adding jbundle package

[IC-F7712 2012-09-24 16:49:41]: Step 16 of 20 Backing up system data

[IC-F7811 2012-09-24 16:49:45]: Step 16 of 20 Backing up system data

[IC-F7811 2012-09-24 16:49:47]: Step 17 of 20 Setting up shared partition data

[IC-F7811 2012-09-24 16:49:48]: Step 18 of 20 Checking package sanity in installation

[IC-F7811 2012-09-24 16:49:48]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[IC-F7811 2012-09-24 16:49:51]: Step 20 of 20 Setting da0s1 as new active partition

[IC-F7712 2012-09-24 16:51:13]: Step 17 of 20 Setting up shared partition data

[IC-F7712 2012-09-24 16:51:14]: Step 18 of 20 Checking package sanity in installation

[IC-F7712 2012-09-24 16:51:14]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[IC-F7712 2012-09-24 16:51:17]: Step 20 of 20 Setting da0s1 as new active partition

[Status   2012-09-24 16:51:32]: Load completed with 0 errors

[Status   2012-09-24 16:51:32]: Reboot is required to complete upgrade

[Status   2012-09-24 16:51:32]: Rebooting FC-1

[FC-1     2012-09-24 16:51:33]: Waiting for FC-1 to terminate

[FC-1     2012-09-24 16:52:18]: Waiting for FC-1 to come back

[FC-1     2012-09-24 16:55:10]: Running Uptime Test for FC-1

[FC-1     2012-09-24 16:55:26]: Uptime Test for FC-1 Passed

[Status   2012-09-24 16:55:27]: FC-1 Booted successfully

[Status   2012-09-24 16:55:27]: Rebooting FC-0

[FC-0     2012-09-24 16:55:27]: Waiting for FC-0 to terminate

[FC-0     2012-09-24 16:56:12]: Waiting for FC-0 to come back

[FC-0     2012-09-24 16:59:06]: Running Uptime Test for FC-0

[FC-0     2012-09-24 16:59:22]: Uptime Test for FC-0 Passed

[Status   2012-09-24 16:59:22]: FC-0 Booted successfully

[Status   2012-09-24 16:59:22]: Rebooting IC-F7811

[IC-F7811 2012-09-24 16:59:28]: Waiting for IC-F7811 to terminate

[IC-F7811 2012-09-24 16:59:59]: Waiting for IC-F7811 to come back

[IC-F7811 2012-09-24 17:06:45]: Running Uptime Test for IC-F7811

[IC-F7811 2012-09-24 17:07:34]: Waiting for FM to be ready

[IC-F7811 2012-09-24 17:13:09]: Performing post-boot Health-Check

[IC-F7811 2012-09-24 17:14:24]: Waiting for routes to sync

[IC-F7811 2012-09-24 17:14:32]: Uptime Test for IC-F7811 Passed

[Status   2012-09-24 17:14:32]: IC-F7811 Booted successfully

[Status   2012-09-24 17:14:32]: Rebooting IC-F7712

[IC-F7712 2012-09-24 17:14:34]: Waiting for IC-F7712 to terminate

[IC-F7712 2012-09-24 17:15:07]: Waiting for IC-F7712 to come back

[IC-F7712 2012-09-24 17:22:03]: Running Uptime Test for IC-F7712

[IC-F7712 2012-09-24 17:22:47]: Waiting for FM to be ready

[IC-F7712 2012-09-24 17:29:28]: Performing post-boot Health-Check

[IC-F7712 2012-09-24 17:30:43]: Waiting for routes to sync

[IC-F7712 2012-09-24 17:30:49]: Uptime Test for IC-F7712 Passed

[Status   2012-09-24 17:30:50]: IC-F7712 Booted successfully

Success

Node Group Upgrades

The NW-NG took around an hour (for 4 nodes) and each RSNG around 40 minutes. This process upgrades one node at a time in the group and updates both slices. There is currently no command to verify each slice's version; it is a known issue.

Node Groups tested were 1 Network node group and 2 RSNGs:

  • NW-NG-0
  • RSNG01
  • RSNG02
  1. From the DG CLI initiate the upgrade:
  • request system software nonstop-upgrade node-group GROUP-NAME FILE.rpm

Output:

root@FSASYDBRDQFAB01> …0-D20.4.rpm node-group NW-NG-0
Upgrading target(s): NW-NG-0
[NW-NG-0  2012-09-24 17:33:25]: Starting with package ftp://169.254.0.3/pub/images/12.2X50-D20.4/jinstall-qfx.tgz
[NW-NG-0  2012-09-24 17:33:25]: Retrieving package
[NW-NG-0  2012-09-24 17:34:47]: Pushing bundle to P6172-C
[NW-NG-0  2012-09-24 17:35:20]: Pushing bundle to P6136-C
[NW-NG-0  2012-09-24 17:35:53]: Pushing bundle to fpc4

[NW-NG-0  2012-09-24 17:36:27]: Pushing bundle to fpc5

[NW-NG-0  2012-09-24 17:36:59]: P6172-C: Validate package…

[NW-NG-0  2012-09-24 17:43:31]: P6136-C: Validate package…

[NW-NG-0  2012-09-24 17:43:31]: fpc4: Validate package…

[NW-NG-0  2012-09-24 17:43:41]: fpc5: Validate package…

[NW-NG-0  2012-09-24 17:43:41]: ------- P6172-C -------

[NW-NG-0  2012-09-24 17:44:17]: Step 1 of 20 Creating temporary file system

[NW-NG-0  2012-09-24 17:44:17]: Step 2 of 20 Determining installation source

[NW-NG-0  2012-09-24 17:44:18]: Step 3 of 20 Processing format options

[NW-NG-0  2012-09-24 17:44:18]: Step 4 of 20 Determining installation slice

[NW-NG-0  2012-09-24 17:44:18]: Step 5 of 20 Creating and labeling new slices

[NW-NG-0  2012-09-24 17:44:19]: Step 6 of 20 Create and mount new file system

[NW-NG-0  2012-09-24 17:44:27]: Step 7 of 20 Getting OS bundles

[NW-NG-0  2012-09-24 17:44:27]: Step 8 of 20 Updating recovery media

[NW-NG-0  2012-09-24 17:44:48]: Step 9 of 20 Extracting incoming image

[NW-NG-0  2012-09-24 17:46:02]: Step 10 of 20 Unpacking OS packages

[NW-NG-0  2012-09-24 17:46:07]: Step 11 of 20 Mounting jbase package

[NW-NG-0  2012-09-24 17:46:33]: Step 12 of 20 Creating base OS symbolic links

[NW-NG-0  2012-09-24 17:47:33]: Step 13 of 20 Creating fstab

[NW-NG-0  2012-09-24 17:47:33]: Step 14 of 20 Creating new system files

[NW-NG-0  2012-09-24 17:47:34]: Step 15 of 20 Adding jbundle package

[NW-NG-0  2012-09-24 17:50:07]: Step 16 of 20 Backing up system data

[NW-NG-0  2012-09-24 17:50:08]: Step 17 of 20 Setting up shared partition data

[NW-NG-0  2012-09-24 17:50:09]: Step 18 of 20 Checking package sanity in installation

[NW-NG-0  2012-09-24 17:50:09]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[NW-NG-0  2012-09-24 17:50:12]: Step 20 of 20 Setting da0s2 as new active partition

[NW-NG-0  2012-09-24 17:50:23]: ------- P6136-C -------

[NW-NG-0  2012-09-24 17:50:23]: Step 1 of 20 Creating temporary file system

[NW-NG-0  2012-09-24 17:50:23]: Step 2 of 20 Determining installation source

[NW-NG-0  2012-09-24 17:50:23]: Step 3 of 20 Processing format options

[NW-NG-0  2012-09-24 17:50:23]: Step 4 of 20 Determining installation slice

[NW-NG-0  2012-09-24 17:50:23]: Step 5 of 20 Creating and labeling new slices

[NW-NG-0  2012-09-24 17:50:23]: Step 6 of 20 Create and mount new file system

[NW-NG-0  2012-09-24 17:50:23]: Step 7 of 20 Getting OS bundles

[NW-NG-0  2012-09-24 17:50:23]: Step 8 of 20 Updating recovery media

[NW-NG-0  2012-09-24 17:50:23]: Step 9 of 20 Extracting incoming image

[NW-NG-0  2012-09-24 17:50:23]: Step 10 of 20 Unpacking OS packages

[NW-NG-0  2012-09-24 17:50:23]: Step 11 of 20 Mounting jbase package

[NW-NG-0  2012-09-24 17:50:23]: Step 12 of 20 Creating base OS symbolic links

[NW-NG-0  2012-09-24 17:50:23]: Step 13 of 20 Creating fstab

[NW-NG-0  2012-09-24 17:50:23]: Step 14 of 20 Creating new system files

[NW-NG-0  2012-09-24 17:50:23]: Step 15 of 20 Adding jbundle package

[NW-NG-0  2012-09-24 17:50:23]: Step 16 of 20 Backing up system data

[NW-NG-0  2012-09-24 17:50:23]: Step 17 of 20 Setting up shared partition data

[NW-NG-0  2012-09-24 17:50:23]: Step 18 of 20 Checking package sanity in installation

[NW-NG-0  2012-09-24 17:50:23]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[NW-NG-0  2012-09-24 17:50:23]: Step 20 of 20 Setting da0s2 as new active partition

[NW-NG-0  2012-09-24 17:50:27]: Step 1 of 20 Creating temporary file system

[NW-NG-0  2012-09-24 17:50:27]: Step 2 of 20 Determining installation source

[NW-NG-0  2012-09-24 17:50:27]: Step 3 of 20 Processing format options

[NW-NG-0  2012-09-24 17:50:27]: Step 4 of 20 Determining installation slice

[NW-NG-0  2012-09-24 17:50:27]: Step 5 of 20 Creating and labeling new slices

[NW-NG-0  2012-09-24 17:50:27]: Step 6 of 20 Create and mount new file system

[NW-NG-0  2012-09-24 17:50:27]: Step 7 of 20 Getting OS bundles

[NW-NG-0  2012-09-24 17:50:27]: Step 8 of 20 Updating recovery media

[NW-NG-0  2012-09-24 17:50:27]: Step 9 of 20 Extracting incoming image

[NW-NG-0  2012-09-24 17:50:27]: Step 10 of 20 Unpacking OS packages

[NW-NG-0  2012-09-24 17:50:27]: Step 11 of 20 Mounting jbase package

[NW-NG-0  2012-09-24 17:50:27]: Step 12 of 20 Creating base OS symbolic links

[NW-NG-0  2012-09-24 17:50:27]: Step 13 of 20 Creating fstab

[NW-NG-0  2012-09-24 17:50:27]: Step 14 of 20 Creating new system files

[NW-NG-0  2012-09-24 17:50:27]: Step 15 of 20 Adding jbundle package

[NW-NG-0  2012-09-24 17:50:27]: Step 16 of 20 Backing up system data

[NW-NG-0  2012-09-24 17:50:27]: Step 17 of 20 Setting up shared partition data

[NW-NG-0  2012-09-24 17:50:27]: Step 18 of 20 Checking package sanity in installation

[NW-NG-0  2012-09-24 17:50:27]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[NW-NG-0  2012-09-24 17:50:27]: Step 20 of 20 Setting da0s2 as new active partition

[NW-NG-0  2012-09-24 17:50:27]: Step 1 of 20 Creating temporary file system

[NW-NG-0  2012-09-24 17:50:27]: Step 2 of 20 Determining installation source

[NW-NG-0  2012-09-24 17:50:27]: Step 3 of 20 Processing format options

[NW-NG-0  2012-09-24 17:50:27]: Step 4 of 20 Determining installation slice

[NW-NG-0  2012-09-24 17:50:27]: Step 5 of 20 Creating and labeling new slices

[NW-NG-0  2012-09-24 17:50:27]: Step 6 of 20 Create and mount new file system

[NW-NG-0  2012-09-24 17:50:27]: Step 7 of 20 Getting OS bundles

[NW-NG-0  2012-09-24 17:50:27]: Step 8 of 20 Updating recovery media

[NW-NG-0  2012-09-24 17:50:27]: Step 9 of 20 Extracting incoming image

[NW-NG-0  2012-09-24 17:50:27]: Step 10 of 20 Unpacking OS packages

[NW-NG-0  2012-09-24 17:50:27]: Step 11 of 20 Mounting jbase package

[NW-NG-0  2012-09-24 17:50:27]: Step 12 of 20 Creating base OS symbolic links

[NW-NG-0  2012-09-24 17:50:27]: Step 13 of 20 Creating fstab

[NW-NG-0  2012-09-24 17:50:27]: Step 14 of 20 Creating new system files

[NW-NG-0  2012-09-24 17:50:27]: Step 15 of 20 Adding jbundle package

[NW-NG-0  2012-09-24 17:50:27]: Step 16 of 20 Backing up system data

[NW-NG-0  2012-09-24 17:50:27]: Step 17 of 20 Setting up shared partition data

[NW-NG-0  2012-09-24 17:50:27]: Step 18 of 20 Checking package sanity in installation

[NW-NG-0  2012-09-24 17:50:27]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[NW-NG-0  2012-09-24 17:50:27]: Step 20 of 20 Setting da0s2 as new active partition

[NW-NG-0  2012-09-24 17:50:27]: Starting with package ftp://169.254.0.3/pub/images/12.2X50-D20.4/jinstall-dc-re.tgz

[NW-NG-0  2012-09-24 17:50:27]: Retrieving package

[NW-NG-0  2012-09-24 17:51:35]: Pushing bundle to re0

[NW-NG-0  2012-09-24 17:52:09]: re0: Validate package…

[NW-NG-0  2012-09-24 17:53:56]: re1: Validate package…

[NW-NG-0  2012-09-24 17:55:53]: Rebooting Backup RE

[NW-NG-0  2012-09-24 17:59:56]: Initiating Chassis In-Service-Upgrade

[NW-NG-0  2012-09-24 18:00:16]: Upgrading group: 2 fpc: 2

[NW-NG-0  2012-09-24 18:10:08]: Upgrade complete for group:2

[NW-NG-0  2012-09-24 18:10:08]: Upgrading group: 3 fpc: 3

[NW-NG-0  2012-09-24 18:19:58]: Upgrade complete for group:3

[NW-NG-0  2012-09-24 18:19:58]: Upgrading group: 4 fpc: 4

[NW-NG-0  2012-09-24 18:29:45]: Upgrade complete for group:4

[NW-NG-0  2012-09-24 18:29:45]: Upgrading group: 5 fpc: 5

[NW-NG-0  2012-09-24 18:39:32]: Upgrade complete for group:5

[NW-NG-0  2012-09-24 18:39:32]: Finished processing all upgrade groups, last group :5

[NW-NG-0  2012-09-24 18:39:37]: Preparing for Switchover

[NW-NG-0  2012-09-24 18:39:54]: Switchover Completed

[Status   2012-09-24 18:39:54]: Upgrade completed with 0 errors

Success

root@FSASYDBRDQFAB01> …0-D20.4.rpm node-group RSNG01

Upgrading target(s): RSNG01

[RSNG01   2012-09-25 11:44:47]: Starting with package ftp://169.254.0.3/pub/images/12.2X50-D20.4/jinstall-qfx.tgz

[RSNG01   2012-09-25 11:44:47]: Retrieving package

[RSNG01   2012-09-25 11:46:55]: Pushing bundle to P6167-C

[RSNG01   2012-09-25 11:47:27]: P6167-C: Validate package…

[RSNG01   2012-09-25 11:53:38]: P6185-C: Validate package…

[RSNG01   2012-09-25 11:54:16]: ------- P6167-C -------

[RSNG01   2012-09-25 11:54:53]: Step 1 of 20 Creating temporary file system

[RSNG01   2012-09-25 11:54:53]: Step 2 of 20 Determining installation source

[RSNG01   2012-09-25 11:54:54]: Step 3 of 20 Processing format options

[RSNG01   2012-09-25 11:54:54]: Step 4 of 20 Determining installation slice

[RSNG01   2012-09-25 11:54:55]: Step 5 of 20 Creating and labeling new slices

[RSNG01   2012-09-25 11:54:55]: Step 6 of 20 Create and mount new file system

[RSNG01   2012-09-25 11:55:03]: Step 7 of 20 Getting OS bundles

[RSNG01   2012-09-25 11:55:03]: Step 8 of 20 Updating recovery media

[RSNG01   2012-09-25 11:55:25]: Step 9 of 20 Extracting incoming image

[RSNG01   2012-09-25 11:56:40]: Step 10 of 20 Unpacking OS packages

[RSNG01   2012-09-25 11:56:45]: Step 11 of 20 Mounting jbase package

[RSNG01   2012-09-25 11:57:09]: Step 12 of 20 Creating base OS symbolic links

[RSNG01   2012-09-25 11:58:10]: Step 13 of 20 Creating fstab

[RSNG01   2012-09-25 11:58:11]: Step 14 of 20 Creating new system files

[RSNG01   2012-09-25 11:58:11]: Step 15 of 20 Adding jbundle package

[RSNG01   2012-09-25 12:00:48]: Step 16 of 20 Backing up system data

[RSNG01   2012-09-25 12:00:50]: Step 17 of 20 Setting up shared partition data

[RSNG01   2012-09-25 12:00:50]: Step 18 of 20 Checking package sanity in installation

[RSNG01   2012-09-25 12:00:50]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[RSNG01   2012-09-25 12:00:54]: Step 20 of 20 Setting da0s2 as new active partition

[RSNG01   2012-09-25 12:01:05]: ------- P6185-C - master -------

[RSNG01   2012-09-25 12:01:05]: Step 1 of 20 Creating temporary file system

[RSNG01   2012-09-25 12:01:05]: Step 2 of 20 Determining installation source

[RSNG01   2012-09-25 12:01:05]: Step 3 of 20 Processing format options

[RSNG01   2012-09-25 12:01:05]: Step 4 of 20 Determining installation slice

[RSNG01   2012-09-25 12:01:05]: Step 5 of 20 Creating and labeling new slices

[RSNG01   2012-09-25 12:01:05]: Step 6 of 20 Create and mount new file system

[RSNG01   2012-09-25 12:01:05]: Step 7 of 20 Getting OS bundles

[RSNG01   2012-09-25 12:01:05]: Step 8 of 20 Updating recovery media

[RSNG01   2012-09-25 12:01:05]: Step 9 of 20 Extracting incoming image

[RSNG01   2012-09-25 12:01:05]: Step 10 of 20 Unpacking OS packages

[RSNG01   2012-09-25 12:01:05]: Step 11 of 20 Mounting jbase package

[RSNG01   2012-09-25 12:01:05]: Step 12 of 20 Creating base OS symbolic links

[RSNG01   2012-09-25 12:01:05]: Step 13 of 20 Creating fstab

[RSNG01   2012-09-25 12:01:05]: Step 14 of 20 Creating new system files

[RSNG01   2012-09-25 12:01:05]: Step 15 of 20 Adding jbundle package

[RSNG01   2012-09-25 12:01:05]: Step 16 of 20 Backing up system data

[RSNG01   2012-09-25 12:01:05]: Step 17 of 20 Setting up shared partition data

[RSNG01   2012-09-25 12:01:05]: Step 18 of 20 Checking package sanity in installation

[RSNG01   2012-09-25 12:01:05]: Step 19 of 20 Unmounting and cleaning up temporary file systems

[RSNG01   2012-09-25 12:01:05]: Step 20 of 20 Setting da0s2 as new active partition

[RSNG01   2012-09-25 12:01:51]: Rebooting Backup RE

[RSNG01   2012-09-25 12:01:51]: ——- Rebooting P6167-C ——-

[RSNG01   2012-09-25 12:08:49]: Initiating Chassis In-Service-Upgrade

[RSNG01   2012-09-25 12:09:09]: Upgrading group: 0 fpc: 0

[RSNG01   2012-09-25 12:11:15]: Upgrade complete for group:0

[RSNG01   2012-09-25 12:11:15]: Upgrading group: 1 fpc: 1

[RSNG01   2012-09-25 12:13:20]: Upgrade complete for group:1

[RSNG01   2012-09-25 12:13:20]: Finished processing all upgrade groups, last group :1

[RSNG01   2012-09-25 12:13:24]: Preparing for Switchover

[RSNG01   2012-09-25 12:14:15]: Switchover Completed

[Status   2012-09-25 12:14:15]: Upgrade completed with 0 errors

Success

Conclusion

The NSSU QFabric upgrade is a simple and well-polished process. Apart from being very time consuming, it works well and I really like how it has been designed and implemented. It is quite verbose and keeps the operator well informed, which I like; I love knowing what is actually going on. I also like (some may argue this is bad) the automatic upgrade of each SCB on the Interconnects and each slice on the nodes, which saves an extra step post upgrade, although it does make rollback harder.

Well done Juniper, this is another great part of the QFabric Solution!

P.S. Just give me an ssh client and automatic system archival.


SRX Branch Chassis Cluster Ports

Posted by cooper on May 12, 2012 in g33k, juniper

Here is a table of the ports that are used for chassis cluster control link and management ports on Branch SRX devices.

The quoted ports are the ‘stand alone’ non-clustered port names (not node1’s port names once clustered). In an SRX cluster, the PIM slots on node1 start at the last PIM slot of node0 + 1. For example, a SRX240 cluster’s node1 starts at PIM 5, so its control link port is effectively ge-5/0/1.

Model    FXP0 (Management)    FXP1 (Control Link)
SRX100   fe-0/0/6             fe-0/0/7
SRX210   fe-0/0/6             fe-0/0/7
SRX220   ge-0/0/6 (> 11.0)    ge-0/0/7
SRX240   ge-0/0/0             ge-0/0/1
SRX550   ge-0/0/0             ge-0/0/1
SRX650   ge-0/0/0             ge-0/0/1

fab0 and fab1 interfaces (Data Link) are always configurable, e.g.:

  • set interfaces fab0 fabric-options member-interfaces ge-0/0/2
  • set interfaces fab1 fabric-options member-interfaces ge-5/0/2
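Before the fab interfaces above will do anything, each box has to be put into chassis-cluster mode from operational mode. A minimal sketch (cluster-id 1 is just an example value; run the first command on the first SRX, the second on the other):

```
set chassis cluster cluster-id 1 node 0 reboot
set chassis cluster cluster-id 1 node 1 reboot
```

Both devices reboot into cluster mode, after which node1’s interfaces are renumbered as described above.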


Backup your Junos configs TODAY !

Posted by cooper on May 8, 2012 in g33k, juniper

Cooper’s tip of the moment: ALWAYS back up your Junos configurations. I hate it when a customer does not; your router does not have RAID (unless it has redundant REs, a VC or is in a Chassis Cluster :)). It’s a built-in feature of Junos, so use it! It even allows multiple sites, so if you have a DR site with storage, push it there too!

Here is the conf:

set system archival configuration transfer-on-commit
set system archival configuration archive-sites "scp://junos@x.x.x.x/data/configs/DEVICE" password "bla"
set system archival configuration archive-sites "scp://junos@y.y.y.y/data/configs/DEVICE" password "bla"
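If you would rather push configs on a timer instead of (or as well as) on every commit, Junos archival also supports a periodic transfer; the interval below is in minutes and is just an example:

```
set system archival configuration transfer-interval 1440
```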

More info: http://www.juniper.net/techpubs/en_US/junos9.5/information-products/topic-collections/swconfig-system-basics/junos-software-system-management-router-configuration-archiving.html


QFabric Part 1 – Explained and Explored First Hand

Posted by cooper on Apr 19, 2012 in g33k, juniper

I was lucky enough to be one of the first APAC partner engineers to get my hands on Juniper’s new QFabric gigantic scalable switch technology. I even beat some of Juniper’s own SEs to it. In general, it rocks, though it is still missing some features and fine tuning; these will come. This post is an introduction to QFabric, with my likes, dislikes and feature wish-list.

I would like to thank Juniper APAC and Yeu Kuang Bin for this excellent opportunity and very knowledgeable training.

Cooper with a working QFabric

What is QFabric?

The simplest explanation of QFabric I can give is that it is basically a Juniper EX Virtual Chassis on steroids. The internal workings of the switch have been broken apart to be MUCH MORE scalable, and Juniper has ensured that there are no single points of failure, only selling the design with fully redundant components.

The QFabric components are:

  • Director Group – 2 x QFX3100 (Control Plane)

  • Interconnects – 2 x QFX3008-I (Backplane / Fabric)
    • 2 REs per Interconnect

  • Nodes (Data Plane)
    • Server Groups – 1–2 nodes per group

  • 40GE DAC cables (1m, 3m, 5m lengths)
  • 40GE QSFP+ (quad small form-factor pluggable plus) – 40 gig uses an MTP connector

QFabric Node Discovery

Control Plane

The control plane is discovered automatically: when you turn a QFX3500 into fabric mode it comes up with a pre-defined Juniper configuration, which lets the directors discover the nodes.

Data/Fabric Plane

The fabric plane is what makes QFabric as scalable as it is. Once again a predefined HA design is supplied, and the directors perform the following tasks:

  1. Discover, build and maintain the topology of the fabric
  2. Assemble the entire topology
  3. Propagate path information to all entities
NOTE: Interconnects DO NOT interconnect to each other

Node Aliasing

Node aliasing allows administrators to give nodes a meaningful name, and is used when talking about specific interfaces for specific nodes or node groups.

  • Identify the nodes via beaconing (the LCD screen) or the serial number on the chassis.
  • e.g. set fabric aliases node-device P6969-C NODE-0
    • This name is used to reference ports and assign the node to a group (discussed next)

Logical Node Groups

Node groups are used to allow the infrastructure to be divided up, and allow the director to know what type of configuration to push to a node’s routing engine. The local routing engine still performs some tasks, predominantly to allow scale. A group can contain a maximum of 2 nodes. A group with 2 nodes is known as a redundant server group (it is a 2-node virtual chassis under the covers). Because of this, a redundant server group can have multi-chassis ae (aggregated ethernet) interfaces. There is one other type of group, known as the Network node group. This group looks after all routing and L2 loop information, such as OSPF and spanning tree. All VLAN routing etc. is done by these nodes.
Group Summary

  1. Network Node Group (1 per QFabric – max 8 nodes)
  2. Server Group (Redundant Server Group optional – 2 nodes)
    1. QFabric automatically creates a redundant server group if two nodes exist in a server group (via a form of virtual chassis).
Port Referencing

Because each node has an ‘alias’ (discussed above), to reference a port in configuration you now use:

  • NODE_ALIAS:INT_TYPE-x/x/x.x
  • e.g. NODE-0:xe-0/0/1.0

Aggregated interfaces can be deployed across chassis in a redundant server group, or on one chassis in a server group:

  • GROUP_NAME:ae0.0
  • e.g. RACK-42-1:ae0.0
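Putting the aliasing and port referencing together, a hypothetical interface stanza might look like the following sketch (the node and group names are the examples from above; the port-mode choice is mine):

```
set interfaces NODE-0:xe-0/0/1 unit 0 family ethernet-switching
set interfaces RACK-42-1:ae0 unit 0 family ethernet-switching port-mode trunk
```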

QFabric can also function with ports in FC and FCoE mode. There are some limitations to this feature today, but it can provide an excellent mechanism to create redundant paths back through the Fabric to the SAN FC-based network. This will be discussed in a dedicated post in my QFabric series.
Summary

QFabric, for a Data Center, is ready today and works extremely well. It allows a HUGE number of 10gb (and soon 40gb) ports to move huge amounts of data around a DC at low latency. It is also effectively one single point of management for all your nodes, unless something goes wrong of course. For a campus, with end users, QFabric does not have many of the key features that we use today with either the MX or EX range. It could be used for large campuses as the aggregation or core (especially when more IPv4 and IPv6 routing is supported) and feed 10gb out to EX switches to provide the ‘edge’. The coming ‘micro’ fabric is also interesting, and will allow for a more compelling footprint within a smaller data center.
Key Likes

  • Single switch in regards to management and functionality
    • No TRILL or other L2 bridging redundancy protocols required
  • Ultra redundant design – enforced by Juniper
    • No half-way deployment; people can’t go in half-assed!
  • The simple, well-thought-out HA deployment/design – a common install = easier to debug for JTAC / engineers like myself
  • Scalability – I can see how big DCs could benefit from having 1 gigantic switch
  • Road map looks good – key features and hardware are coming
Key Dislikes

  • AFL (Advanced Feature License) required for IPv6 (when it arrives)
    • PLEASE Juniper – can we have IPv6 for free, or I will never get customers to deploy it
    • This really frustrates me … You may be able to tell 🙂
  • Limitation of 1 unit per interface
    • No vlan tagging and multiple units in Network Groups
    • Can be worked around by turning the port into a trunk and assigning multiple L3 interfaces
  • The need for legacy SAN infrastructure in order to use FC/FCoE (discussed in part 3)
  • No ability to have a full 48 copper SFP 1gb interfaces in a node for legacy non-10gig equipment
    • The QFX3500 physically cannot fit the SFPs in the top and bottom rows
    • This could be handy to keep legacy equipment and, as it’s replaced, change the SFP to a 10g SFP+
Wish List

  • The Micro Fabric – will allow more use cases
  • Full SNMP interface statistics for all nodes through the director
    • Currently testing this with Zenoss in the Juniper lab – has not worked so far
    • The ability to monitor nodes’ REs, PSUs etc. would also be a plus (I have not tested / read the MIBs yet, so it could be possible)
  • The ability to downgrade, and a system-wide request system rollback from the director
  • Full Q-in-Q support
  • Fully self-contained FC/FCoE support

To come in this series:

Part 2 – Deploying and Configuring
Part 3 – FCoE and SAN with QFabric
Part 4 – QFabric errata (possibly – not sure yet …)

Please note: The information presented here is from my own point of view. It is in no way associated with the firm beliefs of Juniper Networks (TM) or ICT Networks (TM).


Junos Aggregated Ethernet w/LACP and Cisco Nexus Virtual Port Channel

Posted by cooper on Apr 17, 2012 in cisco, g33k, juniper

When I was googling around looking for working configurations of Junos (EX in this case) AE working with a Cisco vPC (Virtual Port Channel), I could not find any examples, so I said that I would post one. I will not be covering how to set up a vPC; if you’re interested in that side, visit Cisco’s guide here. I will also not discuss how to configure a Juniper Virtual Chassis (more info here). The devices used in this example are 2 x Cisco Nexus 7k (running NX-OS 4) and 2 x Juniper EX4500 switches (running Junos 11.4R1) in a Mixed Mode virtual chassis with 2 x EX4200s.

The goal, as network engineers, is to use all bandwidth when it’s available (if feasible) and avoid legacy protocols such as Spanning-Tree for stopping layer 2 loops. vPC from Cisco and VC technologies allow LACP (Link Aggregation Control Protocol) links to span physical chassis, allowing the network engineer to avoid single points of failure and harness all available bandwidth. If a physical chassis was lost, you would still be operating in a degraded fashion, e.g. with 1/2 the available bandwidth, until the second chassis returned.

To configure the Cisco Nexus side you require the following configuration on each vPC-configured chassis. I found that VLAN pruning can be happily done, and a native VLAN 1 is not needed if CDP is not mandatory (I did not test making CDP traverse the trunk through the Juniper – would love to hear if someone does!).

conf t

interface port-channel69
  description Good practice
  switchport mode trunk
  vpc 69
  mtu 9216
  switchport trunk allowed vlan 69

interface Ethernetx/x
  channel-group 69 mode active

Handy Cisco Debug Commands:

  • show vpc
  • show run interface port-channel69 member
  • show vpc consistency-parameters int port-channel 69
  • show port-channel summary

The Juniper side only requires the following. This configuration is identical (you just choose different member interfaces) even if you don’t have a Virtual Chassis configuration.

set interfaces xe-0/0/39 ether-options 802.3ad ae0
set interfaces xe-1/0/39 ether-options 802.3ad ae0
set interfaces ae0 description "Good Practice"
set interfaces ae0 mtu 9216
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members pr0nNet

set vlans pr0nNet vlan-id 69
set vlans pr0nNet l3-interface vlan.69 #If a L3 RVI is required

Handy Juniper Debug Commands:

  • show interface terse ae0
  • show lacp interfaces (you want your interfaces to be collecting and distributing)
  • show interface ae0 extensive

Please let me know if I have done anything that is not optimal – always eager to learn, I am definitely not (and proud of it) a Cisco expert.


Juniper SRX Screens + Dynamic VPNs

Posted by cooper on Mar 3, 2012 in g33k, juniper

A little tip with SRX Dynamic VPNs and ‘security screens’ on the VPN’s ingress zone that I stumbled across during my JNCIE-SEC study.

UPDATE (20120401): Seems Juniper has addressed and fixed this bug …
More info:
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21713&actp=RSS 

It seems you cannot have the ‘IP Spoofing’ screen enabled on the zone that Dynamic VPN IPSec traffic ingresses into. This traffic is dropped by the screen, which can be seen via a ‘security flow traceoptions flag basic-datapath’ trace:

  • ‘packet dropped, drop by spoofing check.’
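For reference, the traceoptions that produced that message can be enabled with something like the following (the trace file name is my own example):

```
set security flow traceoptions file dyn-vpn-trace
set security flow traceoptions flag basic-datapath
```

Remember to deactivate the traceoptions once you are done debugging, as flow tracing is expensive.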

So removing (or deactivating) the ip spoofing check solved the problem:

  • deactivate security screen ids-option from-Internet ip spoofing

Kind of lame; the spoofing screen sounds like a good idea on your Internet-facing interfaces, but it seems a no-no if you want dynamic VPNs. That is all. Hopefully Juniper will eventually make this check smarter.


Valentines – Junos Style !

Posted by cooper on Feb 15, 2012 in g33k, juniper

Awesome – This would get the chicks …

Junos Valentines


Microsoft NPS Server + Juniper JUNOS VSA

Posted by cooper on Oct 5, 2011 in g33k, juniper

A lot of companies run Microsoft’s Active Directory as their AAA infrastructure. A nice add-on to AD (apart from my favourite, ‘Services for UNIX’) is the Network Policy Server (NPS). Using this RADIUS server with any RADIUS-speaking client allows the majority of network infrastructure to use AD as its authoritative authentication source. Using NPS as the source allows new users to obtain access to a box without configuration on each infrastructure device individually, scales well, and disables users’ access when they leave the organisation (local accounts tend to be forgotten).

Finding documentation on using NPS with JUNOS was difficult, so here is how I have got it to work:

First we need the Juniper Vendor Code and the attribute to send to your JUNOS device:

Juniper Vendor ID:
2636
RADIUS Attribute to specify account name (id):
Juniper-Local-User-Name (1)

Then we need to configure a RADIUS client in NPS, configure the JUNOS side, and finally define a ‘Connection Request Policy’ (for more information visit this post).

Once the connection request policy is defined, we need a ‘Network Policy’. This allows the use of AD groups (amongst other attributes) to define which template account, defined locally on the JUNOS device, the user is mapped to. Please refer to the previous NPS post for more information on configuring a network policy.

To add the custom VSA, navigate to the “Network Policies” section in the NPS MMC, open the properties of the policy you wish to add the VSA to, and navigate to the ‘Settings’ tab.
Select ‘Vendor Specific’ under Attributes and click Add. Then select ‘Custom’ from the drop-down list, select Vendor-Specific and click Add:

Now select Add and enter the Juniper vendor code (2636) and attribute number 1 (Juniper-Local-User-Name), with the attribute value set to the name of the template account.

NPS will now send the defined ‘USERNAME’, which must be defined locally on each JUNOS device that talks to this RADIUS server.

If there is no match, JUNOS falls back to the default remote authentication template user, ‘remote’. I recommend setting this to an unauthorized class, so that a user who is authenticated due to bad NPS policies but is not in the required groups cannot obtain any useful access to the JUNOS device.
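On the JUNOS side, the RADIUS client and template accounts might look like the following sketch (the server address, secret and the NETADMIN account/class pairing are my own examples; ‘remote’ is the built-in fallback template user):

```
set system authentication-order [ radius password ]
set system radius-server x.x.x.x secret "bla"
set system login user remote class unauthorized
set system login user NETADMIN class super-user
```

The value NPS returns in Juniper-Local-User-Name should match one of the local template accounts (here, NETADMIN).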

Please let me know how you go and if I have made any boo boos in my post.
The above was tested with JUNOS 11.2r2.4 and Windows Server 2008 R2.


Copyright © 2017 I-R-Coops Blog All rights reserved. Theme by Laptop Geek.