February 12, 2007 (Computerworld) -- Advances in IT over the decades have come mostly in small increments — Release 2.3 yields to 2.4, transistors shrink a few more nanometers, Ethernet gets another speed boost, bugs are fixed, and algorithms get tweaked. That kind of evolutionary approach has served users well, boosting speeds, capacities and application capabilities by many orders of magnitude.
But such incremental improvements are no longer sufficient to keep the Internet viable, according to a growing number of researchers. In fact, they say, the Internet is at the tipping point of overwhelming abuse and complexity.
Even the most sanguine observers say that, should the Internet avoid some kind of digital Armageddon brought on by spammers, hackers, phishers and cyberterrorists, it will nevertheless drown in a flood of mobile gadgets, interactive multimedia applications and Internet-enabled devices, including phones, cars, home appliances and radio frequency identification tags.
...
Nick McKeown, a computer scientist at Stanford University, heads up one such program. He says the Internet is “broken” in at least two places — security and mobility.
“Ten years ago, we were saying the Internet would change the world,” he says. “In a decade or two, we’d be doing air traffic control and remote surgery over the Internet. But if air traffic control were on the Internet today, I wouldn’t fly.”
And it isn’t just a problem of security and reliability, McKeown says; the Internet is getting crushed by complexity. He points out that the original Internet design was based on the idea that users were immobile and connected to the Net by wires.
“But if the user is moving around, you end up with a whole lot of hooks and kludges to keep track of the user,” he says. “There have been various proposals for a mobile IP, and they are all awful. They barely hold together now, but all the routing mechanisms will just break when there are many more mobile devices.”
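To make the kludge concrete, here is a minimal sketch (in Python, with invented names; real Mobile IP is far more involved) of the home-agent indirection that mobile-IP proposals rely on: every packet for a roaming host detours through a fixed home agent, and every move requires a fresh registration, or traffic is lost.

```python
# Minimal sketch of Mobile IP-style home-agent indirection (illustrative only).
# Names and structure are hypothetical, not any real implementation.

class HomeAgent:
    """Fixed router that tracks a mobile host's current 'care-of' address."""

    def __init__(self):
        self.bindings = {}  # home address -> current care-of address

    def register(self, home_addr: str, care_of_addr: str) -> None:
        # Every time the host moves, it must re-register, or packets are lost.
        self.bindings[home_addr] = care_of_addr

    def forward(self, dest_home_addr: str, payload: bytes) -> str:
        # Traffic addressed to the home address detours through this agent
        # (the "triangle routing" kludge), then is tunneled onward.
        care_of = self.bindings.get(dest_home_addr)
        if care_of is None:
            return "dropped: host location unknown"
        return f"tunneled {len(payload)} bytes to {care_of}"

agent = HomeAgent()
agent.register("10.0.0.5", "192.168.1.20")  # host attaches to first network
print(agent.forward("10.0.0.5", b"hello"))  # detours via the home agent
agent.register("10.0.0.5", "172.16.9.7")    # host moves; state must be updated
print(agent.forward("10.0.0.5", b"hello"))  # stale state anywhere breaks this
```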
McKeown and his colleagues have developed a prototype network called Ethane, which centralizes security rather than putting it all around the network in firewalls, virus scanners and the like. With Ethane, all communications are turned off by default. A host joining the network must get explicit permission from a centralized server before it can connect to anything except that server. And the server won’t grant permission unless it is able to determine the location and identity of the requestor.
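The description above maps naturally onto a default-deny policy check. The following is a minimal sketch of that idea in Python; the class and method names are invented for illustration and are not the actual Ethane implementation.

```python
# Illustrative sketch of Ethane's default-deny model, not the actual Ethane code.
# All names here are hypothetical.

class CentralController:
    """Single authority: hosts may talk only to this controller until admitted."""

    def __init__(self):
        self.admitted = {}  # host_id -> (identity, location) binding

    def admit(self, host_id: str, identity: str, location: str) -> bool:
        # The controller refuses admission unless it can bind the requester
        # to both an identity and a network location.
        if not identity or not location:
            return False
        self.admitted[host_id] = (identity, location)
        return True

    def permit_flow(self, src: str, dst: str) -> bool:
        # Default deny: the only flow an unadmitted host may open is to the
        # controller itself; everything else requires prior admission.
        if dst == "controller":
            return True
        return src in self.admitted and dst in self.admitted

ctrl = CentralController()
print(ctrl.permit_flow("laptop-1", "printer-9"))   # False: default deny
print(ctrl.permit_flow("laptop-1", "controller"))  # True: admission channel
ctrl.admit("laptop-1", "alice@example.edu", "port 3, switch 7")
ctrl.admit("printer-9", "printer-9.example.edu", "port 1, switch 2")
print(ctrl.permit_flow("laptop-1", "printer-9"))   # True: both admitted
```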
The National Science Foundation has funded Internet research for many years, but most of its projects have been of the incremental improvement variety, and most have not involved proving out new ideas on a large scale, with millions of users, says Deborah Crawford, deputy assistant director for computer and information science and engineering at the NSF.
But now the NSF is gearing up to build a $300 million to $400 million clean slate on which researchers can chalk up and test radical new ideas. The Global Environment for Networking Innovation, or GENI, will be a giant test laboratory stretching across the U.S., complete with wired and wireless computers, routers, switches, management software and subnets of wireless, cellular, sensor and radio devices. It will include a fiber-optic backbone and tail circuits to some 200 universities.
When it’s complete, sometime after 2010, users will contract for virtual “slices” of the GENI infrastructure, which they’ll be able to use to test ideas “at scale,” simultaneously and independently, Crawford says.
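As a rough picture of what a “slice” means in practice, here is a short Python sketch, with invented names and no relation to GENI’s real control software, of a reservation system that hands each experiment a disjoint share of a testbed so experiments can run simultaneously without interfering.

```python
# Loose illustration of "slicing" a shared testbed; all names invented.

class Testbed:
    def __init__(self, nodes):
        self.free_nodes = set(nodes)
        self.slices = {}  # slice name -> set of nodes reserved for it

    def allocate_slice(self, name: str, count: int) -> set:
        # Each experiment gets a disjoint set of nodes, so experiments can
        # run "at scale", simultaneously and independently.
        if count > len(self.free_nodes):
            raise RuntimeError("not enough free resources for this slice")
        nodes = {self.free_nodes.pop() for _ in range(count)}
        self.slices[name] = nodes
        return nodes

    def release_slice(self, name: str) -> None:
        # Returning a slice's nodes makes them available to other experiments.
        self.free_nodes |= self.slices.pop(name)

tb = Testbed(nodes=[f"node-{i}" for i in range(10)])
dos_experiment = tb.allocate_slice("dos-resistant-routing", 4)
mobility_trial = tb.allocate_slice("mobile-first-stack", 3)
assert dos_experiment.isdisjoint(mobility_trial)  # isolation between slices
```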
GENI will be built without assuming anything, says Allison Mankin, a co-manager of GENI at the NSF. The hardware and software will have the flexibility to accommodate the trial of just about any networking idea, not just those based on packet switching, TCP/IP, routers and other accoutrements of today’s Internet.
Mankin says the kinds of incremental progress that have typically come from earlier NSF projects, while worthwhile, are no longer sufficient. “For example,” she says, “people have actually proven that it’s impossible to prevent denial-of-service attacks with the current Internet. If you want to build a network without denial of service, you have to start over.”
But Mankin acknowledges that it’s unlikely the old slate will be wiped clean completely, with the global Internet scrapped for something entirely different. “Forklifts don’t exist that are big enough for that,” she notes.
------------------------------------------------
There is an old saying: "Nature will always find a way..."
I prefer the variant: "Hackers will always find a way..."
In this case, I mean all shades of hackerdom: black hats, white hats, grey hats, no hats.
While I believe this project is great for academic large-scale network research, I fear it is also an attempt to remove one of the strengths of the internet as we know it: anonymity.
Applying NAC-type controls to the internet is positive for security, but how will it affect the privacy of the general public that has the system forced upon it?
In the end, this type of network will raise the bar for security, but it won't create a marble ceiling against evil.