As this website's name suggests, the network is the ScapeGoat of the IT department. Everybody blames the network. Recently I have had two different issues come up where I had to use a packet capture to prove the network was getting the data where it needed to be.
When it comes to getting a packet capture, I'm very fortunate to have a handful of devices in my network that can take one. I have a large number of Cisco ASA firewalls, F5 BIG-IP load balancers, a Cisco NAM module, and of course Wireshark on any laptop we control. Being able to capture on these devices has drastically reduced the number of times I have had to go to the data center to get a packet capture; it has become very convenient.
Packet captures are not the easiest to read. You really have to know how TCP or UDP works along with your application. In reality, you need to know just enough to prove that your network is working correctly. I can usually accomplish this with a simple filter on the source and destination IPs of interest. It's easy to see whether there is two-way communication, one-way communication, or none at all.
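On an ASA, for example, a capture filtered to the two hosts in question only takes a couple of commands. This is just a sketch; the capture name, interface nameif, and IP addresses below are placeholders, not from an actual incident:

```
! Capture traffic between the two hosts of interest (names/IPs are hypothetical)
capture BLAME interface inside match ip host 10.1.1.10 host 10.2.2.20
! View the captured packets - two-way traffic here usually ends the argument
show capture BLAME
! Clean up when done
no capture BLAME
```

In Wireshark, the equivalent display filter would be something like `ip.addr == 10.1.1.10 && ip.addr == 10.2.2.20`.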
I recently had an appliance at a remote location fail to connect to the database at the main office. The vendor was complaining that the network was blocking the data flow. I was able to get a capture and find the login packet. I was able to tell the vendor over the phone the username and password they were using, and how the server responded. That ended the phone call; the vendor knew it wasn't a network issue.
I could go on and on with examples of proving that the issue was an application issue, but the important thing is that the packet capture doesn't lie. When all else fails, get a packet capture, learn how to read it, and use it to win the blame game!
What were you blamed for, and how did you prove it wasn't your network with a packet capture?
If you found this article interesting or helpful, please share it with the share buttons below!
Recently we had a single supervisor engine crash on a 6513. It actually rebooted once, then went down for good. Google yields lots of info on dual-sup issues, but I didn't see much that related to this specific situation. Hopefully this is helpful to others who encounter a similar issue.
Pair of Catalyst 6513s (6513A, 6513B), SUP720 (dual chassis, single SUP per chassis)
System image file is “sup-bootdisk:s72033-ipservicesk9_wan-mz.122-33.SXI5.bin”
Chassis are trunked together and have downstream access switches connected.
CRASH ONE
The Route Processor was reset, which essentially caused the SUP on 6513A to reboot:
Sep 3 14:42:35: %C6K_PLATFORM-2-PEER_RESET: RP is being reset by the SP
CRASH TWO (CRASH AND BURN)
The SUP went down and did not recover. It could not load its IOS; the chassis was bricked. HSRP failed over to the redundant chassis successfully and impact was minimal, except for a few single-homed devices…DOH!
- Power off chassis, power back on.
- Power off chassis, remove and reseat SUP, power back on.
No dice – 6513A still would not boot.
- Engage Cisco TAC to RMA the SUP. The SUP arrived about 2 hours later. SPEEDY!
- We did not want to bring 6513A online until it was fully configured, so we shut down the uplink ports to the redundant 6513 on both sides.
- 6513A powered down
- Remove the old SUP (prior to this, it had no issues for the last 10 years)
- Install the new SUP and the old flash card, then power the chassis on
- By default, with a new SUP installed, all ports on all line cards are shutdown
- Copy the IOS image from disk0: (the flash card) to sup-bootdisk:
- Set boot variable:
- BOOT variable = sup-bootdisk:s72033-ipservicesk9_wan-mz.122-33.SXI5.bin
- Use the most recent backed up version of config and apply it
- I had to copy the vlan.dat file from 6513B to get all the layer 2 vlans.
- vlan.dat file had to be stored in const_nvram:/
- Don't forget the crypto key! You must generate it manually, as it will not come over in your config, and you can't SSH without it. Here is the config:
- crypto key generate rsa
- It will ask you for the key size – I use 1024
- No shut all necessary ports
- TEST TEST TEST
- I found all the downstream access switch uplinks were err-disabled. I had to go into each access switch and manually shut/no shut the uplink port, for all 70 of them!
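For reference, the key commands from the steps above looked roughly like this. The image filename matches ours; treat the rest, especially the uplink interface name, as a sketch rather than an exact transcript:

```
! Copy the IOS image from the flash card to the SUP bootdisk
copy disk0:s72033-ipservicesk9_wan-mz.122-33.SXI5.bin sup-bootdisk:

! Point the boot variable at the image and save
configure terminal
 boot system sup-bootdisk:s72033-ipservicesk9_wan-mz.122-33.SXI5.bin
end
write memory

! Regenerate the RSA key so SSH works again (it prompts for the key size)
configure terminal
 crypto key generate rsa
end

! On each access switch, bounce the err-disabled uplink
! (interface name is a placeholder)
configure terminal
 interface GigabitEthernet1/0/1
  shutdown
  no shutdown
end
```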
Thanks to redundancy, this wasn't too painful. The SUP was replaced, and we encountered a few gotchas along the way (vlan.dat, crypto key, err-disabled uplinks), but nothing too bad.
If you enjoyed this article, please share it with the social media icons below!