I’ve always advised my clients to plan their IPv6 implementations carefully. The protocol opens new attack vectors through which ne’er-do-wells can assault your infrastructure. I’ve seen countless examples, such as service providers locking down access to routers over IPv4 transport while leaving IPv6 transport completely open. About a year ago, I stumbled onto what I consider to be a serious issue with many IaaS providers.
When I rent an instance from an IaaS provider, I expect to have complete control over which IPv4 or IPv6 source addresses can reach my instance, whether from the Internet or from other customers’ instances. That control should exist at the infrastructure level; changing the security policy is typically performed using an API or GUI. I doubt many people assume that they must implement security policy within the instance itself using iptables/ip6tables or equivalent.
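For readers who want a defense-in-depth fallback inside the instance anyway, a rough ip6tables sketch follows. This is an illustration, not a complete policy: the interface name is an assumption, and the neighbor discovery ICMPv6 types must be exempted or the instance loses IPv6 connectivity on the segment entirely.

```shell
# Hypothetical host-level fallback: drop unsolicited traffic from other
# tenants' link-local addresses while keeping neighbor discovery working.
# "eth0" is an assumed interface name; adapt to your instance.

# First, allow the ICMPv6 types that neighbor discovery depends on.
ip6tables -A INPUT -i eth0 -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT
ip6tables -A INPUT -i eth0 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT
ip6tables -A INPUT -i eth0 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT

# Then drop everything else sourced from link-local space (fe80::/10).
ip6tables -A INPUT -i eth0 -s fe80::/10 -j DROP
```

This does not excuse the provider from enforcing separation at the infrastructure level; it only limits the blast radius if they have not.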
The provider can implement multi-tenancy separation in many ways. In its simplest form, the security policy can be implemented in the hypervisor using iptables, ip6tables, or ebtables. I haven’t seen security gaps in Internet-to-instance or cross-tenant instance-to-instance traffic when the transport is IPv4.
Unfortunately, many providers are not properly protecting their customers’ instances from being accessed from other tenants’ instances over IPv6 transport. In some senses, IPv6 is just 96 more bits; in many critical ways, it is not.
Let’s take multicast. IPv6 uses multicast for purposes for which IPv4 used broadcast. Unless you’ve worked extensively with IPv6, you may not realize the implications. IPv6 multicast addresses live in FF00::/8. Several are well-known and referenced in the RFCs.
| Address | Description |
|---------|-------------|
| ff02::1 | All nodes on the local network segment |
| ff02::2 | All routers on the local network segment |
| ff02::5 | OSPFv3 All SPF routers |
| ff02::6 | OSPFv3 All DR routers |
| ff02::8 | IS-IS for IPv6 routers |
| ff02::16 | MLDv2 reports (defined in RFC 3810) |
| ff02::1:2 | All DHCP servers and relay agents on the local network segment (defined in RFC 3315) |
| ff02::1:3 | All LLMNR hosts on the local network segment (defined in RFC 4795) |
| ff05::1:3 | All DHCP servers on the local network site (defined in RFC 3315) |
| ff0x::c | Simple Service Discovery Protocol |
| ff0x::101 | Network Time Protocol |
| ff0x::108 | Network Information Service |
| ff0x::114 | Used for experiments |
Source: Wikipedia (http://en.wikipedia.org/wiki/Multicast_address#IPv6)
Anyone see where this post is leading? Many IaaS providers are not securing inter-tenant instance connectivity for IPv6 multicast or link-local destinations.
Here’s a quick way to test: ping the all-nodes multicast address (FF02::1).
[email protected]:~$ ping6 -I eth0 ff02::1
PING ff02::1(ff02::1) from fe80::xxxx:d0ff:fe33:e8d eth0: 56 data bytes
64 bytes from fe80::xxxx:d0ff:fe33:e8d: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from fe80::xxxx:d0ff:fe8f:6610: icmp_seq=1 ttl=64 time=6.73 ms (DUP!)
64 bytes from fe80::xxxx:d0ff:fe22:a51e: icmp_seq=1 ttl=64 time=6.74 ms (DUP!)
64 bytes from fe80::xxxx:d0ff:fe4e:fd0d: icmp_seq=1 ttl=64 time=6.74 ms (DUP!)
64 bytes from fe80::xxxx:d0ff:fe00:2b9d: icmp_seq=1 ttl=64 time=6.74 ms (DUP!)
64 bytes from fe80::xxxx:d0ff:fe46:9242: icmp_seq=1 ttl=64 time=6.75 ms (DUP!)
Note that you must specify the appropriate outgoing interface, or the instance will not know where to send packets destined for the link-local-scoped multicast address. If the provider has not locked down IPv6, you will see echo replies from every instance in the Layer 2 domain. For some providers, this encompasses the entire availability zone.
Let’s consider the implications. I can spin up an instance on many public IaaS providers with a credit card and five minutes of my time. When the vulnerability exists, I can use sed/awk to parse the output of the ping to build a list of the link-local addresses belonging to other instances. Applications that bind to the IPv6 wildcard address listen on every IPv6 address…including link-local ones. Port scanning the address list offers the attacker many helpful hints as to how other instances might be compromised.
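The parsing step is trivial. Here is a sketch; the sample output is fabricated for illustration (real output looks like the ping6 transcript above), and in practice you would pipe the live command through the same filter.

```shell
# Reduce ping6 output to a deduplicated list of responding link-local
# addresses. The sample below is made up; against a live target you
# would run something like:
#   ping6 -c 2 -I eth0 ff02::1 | awk '/bytes from/ { sub(/:$/, "", $4); print $4 }' | sort -u
cat <<'EOF' > /tmp/ping6-sample.txt
64 bytes from fe80::aaaa:d0ff:fe33:e8d: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from fe80::bbbb:d0ff:fe8f:6610: icmp_seq=1 ttl=64 time=6.73 ms (DUP!)
64 bytes from fe80::cccc:d0ff:fe22:a51e: icmp_seq=1 ttl=64 time=6.74 ms (DUP!)
64 bytes from fe80::bbbb:d0ff:fe8f:6610: icmp_seq=2 ttl=64 time=6.70 ms (DUP!)
EOF

# Field 4 is the responder's address with a trailing colon; strip it,
# then deduplicate.
awk '/bytes from/ { sub(/:$/, "", $4); print $4 }' /tmp/ping6-sample.txt | sort -u
```

Each unique address in the result is another tenant’s instance, ready to be port scanned.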
You may be thinking, “My public cloud provider doesn’t offer IPv6 connectivity. My instances are safe.” This reasoning is invalid. Exchanging IPv6 packets via link-local addresses within the same Layer 2 domain does not require connectivity to the IPv6 Internet. In fact, I have yet to notice this problem on a provider that offers public IPv6 connectivity. Coincidence? Probably not.
I’ve encountered the defect on IaaS providers globally including some IaaS offerings from Tier 1 ISPs. I want to share my observations as this is my first time reporting a vulnerability.
- Some providers just don’t care at all. This is very odd to me. While this issue does not share the severity of the heartbleed flaw, I believe it is a serious one.
- Most providers claim that they already knew about it. Strangely, some in this group then fixed the problem within a week of my notification.
- Some providers are very appreciative. This type of response is nice to see.
I’ve been notifying providers of this problem privately for about a year, and I’ve tested many of them, so you may find it difficult to find the issue in the wild. That is a good thing, as it is not my intent for crackers to abuse what I’ve shared to disrupt workloads on the public cloud. But because this issue is not hypervisor-specific, I don’t know of another way to get this information out short of a public forum.
To IaaS providers out there: please fix this. Your customers need to be able to trust your platform before they transition the majority of their workloads to the cloud. I encourage all users of IaaS services to ask your suppliers about multi-tenancy security. A laissez-faire mentality toward inter-tenant separation is not good for the industry.