BGPSEC is a set of BGP extensions being developed by the SIDR working group of the IETF to improve the security of the Internet’s routing infrastructure. So far in this series, we’ve looked at the basic operation of BGPSEC, the protections offered, and then the first set of performance issues — how do we prevent replays of signed routing information (by putting a timer in the update), and how that solution affects BGPSEC’s performance in the real world. Now we want to look at the more obvious problem with BGPSEC performance — what about those signatures?
The first issue to contend with is the sheer size of each signature. How large is an X.509 certificate? What’s the size of a signed update compared to the current update size? What does this mean in terms of performance? The simplest question to answer is the size of the signature — the most explicit estimate I could find is that each signature adds 256 octets of new data. This means, according to the slides just linked, that the average table size will be 15 times larger (though I’ve seen estimates of 64 times larger in other contexts). What does even the smaller increase in size — 15x — mean for BGP performance? Let’s put the question a different way: how much time differential is there between transferring a 1GB file and a 15GB file? We can expect similar time differentials in initial BGP convergence if BGPSEC is fully deployed. It’s not actually 15 times longer, because the larger file size will hit more TCP slow starts over its lifetime, encounter more potential errors, run into the headroom allowed for transfer buffers, etc.
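As a rough back-of-envelope illustration of where the growth comes from: the 256-octet figure is the estimate cited above, while the unsigned attribute size and average AS path length below are hypothetical round numbers, not measurements — one signature is carried per AS hop, so the overhead multiplies with path length.

```python
# Back-of-envelope estimate of per-route attribute growth under BGPSEC.
# The 256-octet signature figure is the estimate cited above; the other
# numbers are illustrative assumptions, not measured values.

SIG_OCTETS = 256          # estimated new data added per signature
UNSIGNED_ATTRS = 64       # assumed size of a typical unsigned attribute set
AVG_PATH_LENGTH = 4       # assumed average AS path length (one signature per hop)

def signed_route_octets(path_length=AVG_PATH_LENGTH):
    """Octets of attribute data carried for one signed route."""
    return UNSIGNED_ATTRS + SIG_OCTETS * path_length

growth = signed_route_octets() / UNSIGNED_ATTRS
print(f"signed route: {signed_route_octets()} octets, "
      f"~{growth:.0f}x the unsigned attribute size")
```

Even with these generous round numbers, the result lands in the same order of magnitude as the 15x table-growth estimate above.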
But the problem is actually worse than this. BGP normally uses a sort of “dictionary compression” scheme to reduce the amount of information and the number of segments transferred during large convergence events. This scheme is called update packing, and it works something like this —
Assume you have two prefixes you need to advertise to a peer, 192.0.2.0/25 and 192.0.2.128/25. Each of them has the same set of attributes — AS Path, community strings, MED, and other information. You could, of course, simply send two updates. This requires sending the set of attributes twice, in two different segments, between the two BGP speakers. But size isn’t the only consideration — there’s also the number of TCP segments transmitted, and the attendant amount of processing that goes with each segment.
To reduce both the size of the updates, and the number of segments transmitted, BGP packs its updates. A BGP speaker will examine the set of updates currently sitting in the queue and combine all the reachable destinations (NLRIs) with a common set of attributes into a single update. Hence, the BGP update on the wire will have one set of attributes, followed by a set of NLRIs (reachable destinations).
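The packing idea can be sketched in a few lines (hypothetical data structures here — attribute sets modeled as tuples; a real implementation operates on the parsed BGP update queue):

```python
from collections import defaultdict

# Sketch of BGP update packing: NLRIs sharing an identical attribute
# set are combined into a single update. Attribute sets are modeled as
# hashable tuples purely for illustration.

def pack_updates(routes):
    """Group (prefix, attributes) pairs into one update per attribute set."""
    groups = defaultdict(list)
    for prefix, attrs in routes:
        groups[attrs].append(prefix)
    # Each (attrs, nlri_list) pair becomes a single update on the wire.
    return [(attrs, nlris) for attrs, nlris in groups.items()]

shared_attrs = ("as_path=65000 65001", "med=100")  # common attribute set
routes = [("192.0.2.0/25", shared_attrs), ("192.0.2.128/25", shared_attrs)]
print(len(pack_updates(routes)))  # → 1: one update carries both NLRIs
```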
What does folding signatures into the update process look like? To understand the answer to this question, remember that the hash is computed across the NLRI (reachable destination) — and the signature is carried as an attribute. The result is that every NLRI now has a unique set of attributes. Goodbye, packing. BGPSEC proponents claim this problem can be worked out over time — more efficient coding mechanisms can be proposed and tested. The problem is that anything proposed in this space has the unfortunate effect of forcing a redesign of the entire BGP packet format, which causes lots of deployment problems. Essentially, the only way to solve this problem is to replace BGP4 with BGP5, or some similar analog.
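The effect on packing can be shown by counting distinct attribute sets with and without a per-NLRI signature (a sketch with placeholder signature values, just to show the counting):

```python
from collections import defaultdict

# With BGPSEC, each NLRI carries a signature computed over that NLRI, so
# no two routes share an attribute set and packing collapses to one
# update per prefix. Signature values below are placeholders.

def count_updates(routes):
    """Number of updates needed: one per distinct attribute set."""
    groups = defaultdict(list)
    for prefix, attrs in routes:
        groups[attrs].append(prefix)
    return len(groups)

base_attrs = ("as_path=65000 65001", "med=100")
prefixes = ["192.0.2.0/25", "192.0.2.128/25"]

unsigned = [(p, base_attrs) for p in prefixes]
signed = [(p, base_attrs + (f"bgpsec_sig({p})",)) for p in prefixes]

print(count_updates(unsigned), count_updates(signed))  # → 1 2: packing is lost
```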
So BGPSEC not only forces the entire table to be (at least) 15 times larger, it also removes BGP’s built-in compression mechanism. The only result we can expect, in the real world, is a massive decrease in BGP performance — in a world where faster convergence is becoming more important, rather than less.
To this point, we’ve not talked about the increased processing required to actually validate a signature chain — given I’m not a crypto expert, it’s not a topic I’m going to address in any detail. I will suggest, however, that this is a major problem, as these are public-key signatures, the most expensive kind of signature to validate.
So, performance-wise, we have what appears to be a perfect storm of bad things in BGPSEC. Updates must include a timer, which requires a constant stream of updates across a very large routing table. These updates will be at least 15 times larger than they are today, and cannot be compressed using current BGP techniques. And each update will require a great deal more processing power to validate.
Next time, we’ll discuss route leaks — another entire area of concern.