Trigger is an open source network automation framework with batteries included. Trigger differs from other automation projects such as Ansible, which is built on top of the Python SSH library Paramiko, by being built on the asynchronous I/O framework Twisted. Twisted allows a staggering number of devices to be managed concurrently, thanks to the asynchronous socket functionality available in most modern operating systems and Twisted's own SSH implementation, conch.
In this post I’m going to outline some of the Trigger features that are relevant to today’s network management challenges. I’ll provide some code and detail “just enough” Python to get you started on your journey to automating at the speed of light!
A full list of network hardware platforms Trigger supports can be found here: https://trigger.readthedocs.io/en/latest/platforms.html
However, you may find that a device which provides a Cisco-like shell environment will still work even if it is not on that list (with some serious prompt-string regular expression matching).
So why is asynchronous execution important? Aside from squeezing our changes into those tiny change windows the ITIL folks set, asynchronous execution allows us to implement changes faster, with the added bonus of shared state.
Once your changes start taking hours rather than minutes to complete, you start seeking out ways to break the work into smaller chunks that can be executed in parallel. This is a fine and worthy cause, and any normal person would assume that making use of all those extra cores on your CPU should do the trick. What we are describing here is a threaded solution: the engineer defines a pool of workers and sets them to the task of connecting to an endpoint, whether over an API or SSH, and executing the current task. There are two issues with this model:
- Connections will be held up when managing slow or lossy links. A thread will either be moved out of context (context switching, which is costly) or block until the data has been fully read from the device on the slow circuit.
- Managing threads is complicated. Locks, queues and race conditions all add code that has nothing to do with your network change. Why waste cycles managing concurrency when you can leverage other people's good work?
We aren’t working with large streams of data. We typically aim to push smallish blobs of text to and from our endpoints. This means deferring data processing to a separate thread of execution is overkill.
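To see why the threaded model stalls, here is a minimal sketch using only the standard library. The device names are made up and time.sleep stands in for network I/O; the point is that one slow device pins a worker thread for the entire duration of its transfer:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_command(device):
    # Simulate an SSH session: a slow or lossy link blocks its
    # worker thread for the full duration of the transfer.
    delay = 2.0 if device == 'slow-router' else 0.1
    time.sleep(delay)
    return device, 'show version output'

devices = ['r1', 'r2', 'slow-router', 'r3']

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(run_command, devices))
elapsed = time.time() - start  # dominated by the one slow device
```

With only two workers, the wall-clock time is dictated by the slowest link, and the thread servicing it does nothing useful while it waits.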
A different approach is to borrow from what web developers have been doing for years: non-blocking sockets. An event-driven execution model solves both of these problems:
- Sockets are only acted upon when data is ready. For every read and every write on a socket (in our case, when the output of an SSH CLI command has reached our terminal session) a callback is executed on that data. This eliminates the need for a thread to sit idle waiting for data to arrive over a slow or lossy link; the callback fires as soon as the data is ready.
- We can make use of the Twisted I/O library. Twisted is a solid, dependable framework with over ten years of development behind it. It supports all the usual clients such as HTTP, SSH and Telnet, plus whatever else runs over UDP/TCP if you have the time and patience to implement it. Although modern Python supports asynchronous execution through asyncio, it does not provide the robust client protocols that Twisted does.
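Trigger relies on Twisted for this, but the mechanics of an event loop can be sketched with the standard library's asyncio module mentioned above. The device names and delays here are invented for illustration; note that one slow session no longer holds up the other fifty:

```python
import asyncio
import time

async def fetch_output(device, delay):
    # Data "arrives" only after the link round-trip completes; while
    # this session waits, the single-threaded event loop services the
    # other sessions instead of blocking on this one.
    await asyncio.sleep(delay)
    return device, 'show clock output'

async def main():
    sessions = [fetch_output('r%d' % i, 0.1) for i in range(50)]
    sessions.append(fetch_output('slow-router', 0.5))  # one lossy link
    return dict(await asyncio.gather(*sessions))

start = time.time()
results = asyncio.run(main())
elapsed = time.time() - start  # all 51 sessions finish together, ~0.5s
```

All fifty-one "sessions" complete in roughly the time of the single slowest one, rather than the sum of all of them.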
Reacting to the Unexpected
Using an event-driven architecture allows us to make informed decisions based on the output of a given command across all our target endpoints during a deployment, because our event handler is contextually aware of all connections and data. If an exception is raised on our spoke-end router, we can change the execution to, say, skip the configuration revision on our head-end router and send a log to our log handler.
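The idea can be sketched generically in a few lines. The device names and skip logic below are illustrative, not Trigger's actual API; in Trigger, per-device error hooks on a Commando subclass serve the same purpose:

```python
import logging

log = logging.getLogger('deploy')

state = {'spoke_failed': False}  # shared deployment state

def on_error(host, exc):
    # Errback: a failure on the spoke flags shared state so that
    # dependent changes further along are skipped.
    state['spoke_failed'] = True
    log.warning('%s failed: %s; skipping head-end revision', host, exc)

def configure(host):
    if host == 'headend-router' and state['spoke_failed']:
        return 'skipped'
    if host == 'spoke-router':
        raise RuntimeError('config rejected')
    return 'configured'

results = {}
for host in ['spoke-router', 'headend-router']:
    try:
        results[host] = configure(host)
    except RuntimeError as exc:
        on_error(host, exc)
        results[host] = 'failed'

# results: {'spoke-router': 'failed', 'headend-router': 'skipped'}
```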
Security Baked In
So you’ve just started automating your networks. Your scripts/ directory is full to the brim with automated treasures. How many of you would admit that these scripts are either a) scheduled on the command line with credentials passed as arguments, or b) have credentials hard-coded into the scripts themselves?
Trigger has an inbuilt tacacsrc module that handles encrypted caching of your login credentials. It is namespaced, so you can keep multiple sets of credentials for different realms. There are two methods of encryption in play: a passphrase (not so secure, I know) and GPG encryption (better).
Kicking the Tyres
So you’re on the phone to TAC. The TAC engineer asks you for the output of show tech-support plus some other verification commands across every device in your fleet that is manifesting issues. Normally this is a chore: you set up a TFTP server, log into each device, run each command like show tech-support | redirect tftp://x.x.x.x/show-tech-support.txt, then compress and upload.
What if I told you that you could collect all the verification output from all your devices in roughly 6 seconds? All it takes is a simple Python script:
from trigger.cmds import Commando

class ShowTechSupport(Commando):
    """Run 'show tech-support' against Cisco devices."""
    vendors = ['cisco']
    commands = ['show tech-support']

if __name__ == '__main__':
    device_list = ['p1.demo.localdomain']
    showtechsupport = ShowTechSupport(devices=device_list)
    showtechsupport.run()  # Commando exposes this to start the event loop
    results = showtechsupport.results
    for host in results:
        print host, results[host]
In the next post I am going to run you through how to set up your Trigger environment. In subsequent posts I will guide you through developing your first scripts and showcase some of the advanced features that Trigger has to offer.