Recently in IPv4 Exhaustion Category

Earlier today, the Asia-Pacific registry got the last two blocks in the central IPv4 pool.

The IANA has been sitting on five /8s (one per regional registry), and these will now be handed out, one to each registry, along with the fragments from the legacy class B space. The IANA IPv4 registry doesn't yet reflect this.

This is the first milestone in IPv4 depletion. The regional registries will start to run out later this year. Most likely, Asia-Pacific will run out first, followed by Europe, then North America. The South American and African registries will last longer, as there's much less demand for IPv4 addresses in those regions.

The Asia-Pacific registry estimates it will run out of IPv4 addresses by the end of this summer.

It was fun while it lasted.

Four more gone.


Four /8s were just allocated from the central IANA IPv4 pool (5/8 and 37/8 to the European registry, 23/8 and 100/8 to the North American registry). Less than 3% of the IPv4 address pool remains.

The central IPv4 pool will be gone before the snow melts here in central Pennsylvania. It's likely that the Asia-Pacific registry will get another large allocation early next year, and that will be the end of it. Please start deploying IPv6 now if you haven't already.

Two more IPv4 address blocks were just allocated. Blocks 177/8 and 181/8 went to the Central and South American registry.

6% of the IPv4 address space is left.

Two more gone


Late last week, the IANA allocated two more /8s from the IPv4 free pool. Both blocks (14/8 and 223/8) went to the Asia-Pacific registry.

The IANA IPv4 free pool has less than 8% left.

Two more down


Yesterday, the IANA allocated two /8s to the Asia-Pacific region. The global IPv4 pool is down to 11.7% free.

Except that it's actually a little less than 11.7%. Last month, ICANN adopted the "n=1" proposal. This policy sets aside one /8 for each Regional Internet Registry (RIR).

There are five RIRs. If you exclude the five reserved /8s, the IPv4 pool has only 25 /8s free, or 9.7%. That's a pretty sobering reality.
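The arithmetic behind these percentages is straightforward: the IPv4 space comprises 256 /8 blocks, so each /8 is about 0.39% of the total. Here's a quick sanity check of the numbers above (the block counts are read back from the percentages in this post):

```python
# IPv4 consists of 256 /8 blocks; each percentage is a fraction of that total.
TOTAL_SLASH8S = 256
free_blocks = 30   # the free pool at the time of this post (11.7% of 256)
reserved = 5       # one /8 reserved per RIR under the "n=1" policy

print(f"free:   {free_blocks / TOTAL_SLASH8S:.1%}")                 # 11.7%
print(f"usable: {(free_blocks - reserved) / TOTAL_SLASH8S:.1%}")    # 9.8% (truncated to 9.7% above)
```

The small discrepancy (9.8% vs. 9.7%) is just rounding versus truncation of 25/256.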

Absolutely nothing.

KanREN, the Kansas Research & Education Network, is doing a lot with IPv6. As best as I can tell, they're the second Internet2 member to have IPv6-enabled web, email and DNS services (3Rox was first). So, hats off to them!

This is particularly relevant, since two more IPv4 address blocks were allocated today (blocks 110 and 111, to the Asia-Pacific registry). Only 14% of the IPv4 address space is unallocated.

This week in IPv6


It's been a busy week for IPv6.

On Tuesday, the Squid web cache project announced a beta of version 3.1, the first release with IPv6 support. For those of you using Squid 2.x, there are plans to backport this code to the 2.x line (the work will be spread between releases 2.8 and 2.9). This has been a long time coming; the code was merged almost a year ago. Given the dearth of IPv6 deployment, dual-stacked web proxies will probably be important during the IPv6 migration.

On Wednesday, UCLA IPv6-enabled its web site:

$ dig -t AAAA

; <<>> DiG 9.4.2-P2 <<>> -t AAAA
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41596
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 1


;; ANSWER SECTION: 272 IN AAAA 2607:f010:3fe:101:101d:9ff:fe32:a7d1

;; AUTHORITY SECTION: 21572 IN NS 21572 IN NS 21572 IN NS


;; Query time: 11 msec
;; SERVER: 2610:8:6800:1::4#53(2610:8:6800:1::4)
;; WHEN: Fri Nov 7 14:19:28 2008
;; MSG SIZE rcvd: 131

As far as I know, UCLA is the first university in Internet2 to take this step. Congratulations!

On Thursday, there were several code commits to FreeBSD to remove IPv4 dependencies. The goal is to let the kernel be compiled without IPv4 support (one can only dream of such a day).

Also, RIPE (the Regional Internet Registry for Europe, the Middle East and parts of Asia) published the slides from its meeting last month in Dubai. There were several excellent talks. Several European Internet Exchange Points showed growth in the number of IPv6-enabled customers.

Etisalat, a large Middle East telco, discussed their IPv6 deployment, which has been underway for most of the decade. They're motivated for the usual reasons: the significant increase in Internet-attached devices is exceeding NAT's usefulness. I found it interesting that they've required IPv6 support in new equipment purchases since 2001! I'm really hoping that Penn State adopts a similar policy soon.

Google presented two talks on its ongoing IPv6 trial. As a refresher, Google doesn't see NAT as a long-term solution to IPv4 address depletion. In fact, they claim that excessive NAT will have a significant negative impact on common web apps. So they've run an IPv6 pilot since March. So far, the results are very encouraging: only 0.09% of users have broken IPv6 connectivity.

I'm glad to see that Google (among others) is actually gathering empirical data on IPv6 usage, since there is a lot of FUD out there that IPv6 will cause all sorts of breakage. To quote Google, "It's not that broken... don't believe the FUD."

Having said that, things are not perfect. IPv6 routing is often sub-optimal (to be polite). Gert Doering presented on the state of the IPv6 routing table, and asked "Why does traffic from Germany to Germany get routed through the US and Hong Kong?" He showed an example of a user in Munich accessing a server in Frankfurt. The traffic went across the Atlantic to Washington, DC, across the continental US to Chicago, then Seattle, then across the Pacific to Hong Kong, then finally back to Germany. This has got to stop.

Google also sees these sorts of problems. They had an example of traffic from Virginia to Virginia being routed through Amsterdam. All of this adds significant latency to IPv6, making users disinclined to use it. Fortunately, we know how to fix the problem: we need to start filtering IPv6 routes. We've done this for IPv4 for a while, and it's time to do the same for v6.
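To make the fix concrete, here is a minimal sketch of the kind of prefix-length filter involved, using Python's standard ipaddress module. The /48 cutoff is a commonly used threshold for rejecting overly-specific announcements, not something prescribed in the talks above:

```python
import ipaddress

MAX_PREFIXLEN = 48  # common practice: reject anything more specific than a /48

def accept_route(prefix: str) -> bool:
    """Return True if an announced IPv6 prefix passes a simple length filter."""
    net = ipaddress.ip_network(prefix, strict=True)
    return net.version == 6 and net.prefixlen <= MAX_PREFIXLEN

print(accept_route("2001:db8::/32"))  # True: a typical RIR-sized allocation
print(accept_route("2001:db8::/64"))  # False: too specific, filtered out
```

Real filters are richer than this (bogon lists, per-registry allocation sizes), but the principle is the same: drop announcements that shouldn't be in the global table.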

So that's the week in IPv6. There was a lot more covered at RIPE-57. I'll blog about that later.

Two more down


We're down two more /8 blocks in the IANA free pool. About two weeks ago, 173/8 and 174/8 were allocated to ARIN. We're down to 42 blocks free.

This really highlights the need to develop a long-term solution to the IPv4 addressing crunch. In 2007, ARIN managed to reclaim three legacy blocks. It took a lot of work, and ARIN isn't optimistic about reclaiming the remaining 41 legacy blocks.

Bottom-line: In one month, we've used up two-thirds of what it took a year to reclaim. That's not sustainable, folks.

In my previous blog entry, I reported that PSU has about 300,000 IPv4 addresses assigned to it, and that ITS manages a little more than half of those. ITS projects that it will exhaust its IPv4 pool sometime in late 2009 or early 2010. This graph is based on data from TNS in January 2008:

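A projection like the one in the graph is, at heart, a linear extrapolation of the allocation rate. Here's a toy sketch of the idea; the figures below are made up for illustration and are not TNS's actual numbers:

```python
def years_until_exhaustion(pool_size: int, used: int, rate_per_year: float) -> float:
    """Naive linear projection: assumes the allocation rate stays constant."""
    return (pool_size - used) / rate_per_year

# Hypothetical numbers, for illustration only:
print(years_until_exhaustion(pool_size=160_000, used=130_000, rate_per_year=15_000))  # 2.0
```

Real projections are fuzzier, of course: allocation rates tend to accelerate as organizations grow, which is exactly why the end tends to arrive sooner than a straight line suggests.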

When we finally exhaust our pool, will we be able to go to ARIN and get more? That depends on the status of ARIN's IPv4 pool and on any new policy ARIN has adopted. Several of the RIRs have adopted policies which make it much more difficult to obtain additional IPv4 space. I believe that ARIN will still have addresses left, but we probably won't be able to get space due to policy changes. If that's the case, it's a pretty scary realization. Quite likely, we'll be stuck with the IPv4 space we already have.

It will be several years before ITS runs out of IPv4 space, but the end is closer than many people think. No, the sky is not falling, but I firmly believe that we have to start planning now for what to do when we do run out of space. IP addressing will be a serious issue during the University's next strategic planning period (2009-2014).

Several network managers on campus have already begun efforts to conserve IPv4 space. They're doing things like moving networked printers and internal servers to private address space. But there are only so many printers to move! These steps are important in the short-term to preserve business continuity, but ultimately, they only relieve the pressure temporarily. We're still left with the question: What is our long-term solution to the address crunch?

Some within the university have proposed using static NAT for desktops. This approach has issues, including potentially breaking certain applications. The number of apps broken by NAT has declined in recent years, in part due to protocols such as STUN, TURN and ICE, to name just a few. Frequently, NAT-enabled apps will have to support multiple such protocols. Adding NAT traversal support to an application can add considerable development and testing costs (see this paper on Skype for a good example of NAT-enabling a real-world app). Given the aggressive use of NAT today, many developers are already adding NAT support to their apps.

Others are proposing using IPv6. Of course, deploying IPv6 isn't free either. There are both hardware and software costs involved. New routers and firewalls have to be purchased. Of course, network equipment needs to be regularly replaced anyway to keep up with increased user demands. If IPv6 support is introduced as network equipment is replaced, deployment costs can be considerably reduced (but not eliminated). Software may need to be upgraded to use IPv6-safe APIs or to remove hard-coded IPv4 assumptions. There will certainly be testing costs. Fortunately, many programming languages (Python, Ruby, Java) and networking libraries (CFNetwork, Qt, NSPR, APR) are already IPv6-enabled. This support tremendously helps reduce programming costs. Even so, there will certainly be at least a few legacy apps which will be very expensive to convert to IPv6.
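As an example of what an "IPv6-safe API" means in practice, here's a sketch of a protocol-agnostic connect helper in Python. Instead of hard-coding AF_INET, it asks getaddrinfo for every address record (AAAA and A) and tries them in order; the helper name is my own invention, not part of any library mentioned above:

```python
import socket

def connect_any(host: str, port: int) -> socket.socket:
    """Connect over whichever protocol works, trying each address
    getaddrinfo returns instead of assuming IPv4 (AF_INET)."""
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(addr)
            return s  # first address that answers wins
        except OSError as err:
            s.close()
            last_err = err
    raise last_err if last_err else OSError(f"no addresses for {host}")
```

Code written this way needed no changes when hosts started publishing AAAA records, which is exactly why the getaddrinfo-style APIs reduce conversion costs so much.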

Ultimately, I think this is an economic question: In the long-term is it cheaper to (more aggressively) deploy NAT and NAT-enable our apps, or to deploy IPv6 and v6-enable our apps? Any measurement of cost should include the factors outlined above plus opportunity cost.

My vote is for IPv6. Why? Because the graph above scares me. I'm just not comfortable living in a network where we've used up most of our publicly routable addresses. Even with extensive use of NAT, I'm afraid that such an environment limits our ability to deploy new applications and services. Even if Penn State can reclaim significant portions of its IPv4 space, the story doesn't end at our border router. What about our partners? What about network operators in Europe and Asia who are already deploying IPv6? If we need to interoperate with them, we'll need an IPv6 story of our own.

I'm curious what others think. How do you answer the question?
In the early days of the Internet, addresses were assigned in a very wasteful manner. Organizations would frequently get a Class A assignment (16.7 million addresses) regardless of their size. With the introduction of the RIR system in the mid-90s, new allocations are much more appropriately sized.
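For scale, the classful assignment sizes are just powers of two, which makes the waste easy to quantify:

```python
# Legacy classful assignment sizes versus a right-sized CIDR block
class_a = 2 ** 24           # /8: 16,777,216 addresses (the "16.7 million" above)
class_b = 2 ** 16           # /16: 65,536 addresses
cidr_20 = 2 ** (32 - 20)    # /20: 4,096 addresses, a plausibly-sized modern allocation

print(class_a, class_b, cidr_20)  # 16777216 65536 4096
```

An organization that actually needs a /20 but holds a legacy class A is sitting on roughly four thousand times more space than it uses.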

As we continue to use up IPv4 space, there is increased pressure on the legacy IPv4 class A holders to relinquish their space and switch to a more efficient allocation. But this is a voluntary process; there doesn't appear to be a legal or policy procedure to force a legacy holder to relinquish their space. So far, there haven't been too many volunteers. Last week, IANA reclaimed the 014/8 legacy block. See RFC 3330 for more on the 014/8 block.

While this is important, I have to point out that this represents less than a 0.5% increase in the amount of available IPv4 space. I still think that IPv6 is the only viable long-term solution for sustainable Internet addressing.