Given that the Grid group at MIT seems to be committed to doing real work on real networks (building exactly what we need), maybe the focus of community groups should be on other parts of the problem. The issues, in my mind, are things like distributed backhaul, deployment and installation, and hostile network environments where there are nodes that are not functioning within the protocol spec.
Distributed backhaul seems to be one of the tougher problems. The trick is how to allow for a multi-point border without being an autonomous network. My solution for now is to get some cheap colo and backhaul the traffic there over IP tunnels. This has the disadvantage that it does not use the internet efficiently, since all traffic bounces through the colo as it leaves or enters the network. Maybe we could do something smart with ICMP redirects or triangle routing (a la Triangle Boy).
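As a rough sketch of what that looks like from a single node's point of view (the addresses and names here are made up), the forwarding decision is basically: keep mesh traffic inside the mesh, and shove everything else down the tunnel to the colo:

```python
import ipaddress

MESH_PREFIX = ipaddress.ip_network("10.42.0.0/16")  # assumed mesh address space
TUNNEL_IFACE = "tun0"                                # IP tunnel up to the colo

def next_hop(dest, mesh_routes):
    """Pick where a packet goes: stay in the mesh, or hairpin through the colo."""
    addr = ipaddress.ip_address(dest)
    if addr in MESH_PREFIX:
        # Inside the mesh: defer to whatever the mesh routing protocol decided.
        return mesh_routes.get(dest, "mesh-default")
    # Everything else goes down the tunnel, even if a nearby border node has
    # its own uplink -- which is exactly the inefficiency described above.
    return TUNNEL_IFACE

print(next_hop("10.42.3.7", {"10.42.3.7": "wlan0 via 10.42.3.1"}))
print(next_hop("93.184.216.34", {}))
```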
The deployment issues are getting the hardware into people's hands, making it easy to install, and knowing the current network topology so we know where new nodes can go. These issues have seen some progress. On the hardware side, I know BAWRN has a nice hardware setup [PDF] they are using.
These units do take a bit of construction. What can we do to make them more off-the-shelf? I think this will probably lead to better results than trying to hack on Linksys hardware, which has a history of changing often, making it hard to offer simple instructions for users who want to install alternate software. The disadvantage of this approach is its expense. For now, I think we should probably just live with that and wait for prices to drop.
The other half of the problem is the provisioning bit. There is progress here as well: two places where we see it are captive portals such as NoCat authentication, and NoCat mapping.
On the hostile network environment side, I do not know of much published work. I have put some thought into it, but have not written anything down. The answer may vary radically depending on the routing protocol used. This needs some thought, since it is not just about people trying to hack the network, but also about bugs in the software (which I am sure we will have). In my opinion, the goal should be to avoid non-local DoS attacks on the network. By non-local I mean attacks that can be mounted from across the network, as opposed to attacks that only work on the local links, like a radio jammer. I suspect this will require the solution to be based on information each node computes locally for itself. One non-local attack would be route poisoning. I think the Freenet next-generation routing work may be a good place for inspiration.
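To make the "locally computed" idea concrete, here is a rough sketch (names and constants invented, loosely in the spirit of the Freenet estimators) of a node keeping its own per-neighbor success estimate from traffic it actually observes, rather than anything the neighbor claims about itself:

```python
ALPHA = 0.1  # smoothing factor for the moving average; invented constant

class NeighborEstimate:
    """A node's own, locally maintained estimate of one neighbor's behavior."""

    def __init__(self):
        self.success_rate = 0.5  # start agnostic about a new neighbor

    def observe(self, delivered):
        """Fold in one forwarding attempt this node actually witnessed."""
        sample = 1.0 if delivered else 0.0
        self.success_rate = (1 - ALPHA) * self.success_rate + ALPHA * sample

    def suspicious(self):
        # A neighbor advertising great routes while dropping our packets looks
        # like route poisoning -- or just a buggy node; either way, route around it.
        return self.success_rate < 0.2
```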
This type of approach could probably be layered on top of arbitrary routing protocols, acting as a multiplier on the protocol's own decisions about metrics. I think protocols that pass link state around the network are good from an efficiency standpoint, but not from a robustness standpoint.
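Something like this, where the locally observed success rate just scales whatever cost the underlying protocol already computed (again a hypothetical sketch, not any particular protocol's API):

```python
def effective_metric(protocol_metric, success_rate):
    """Inflate the protocol's cost as the locally observed success rate drops."""
    rate = max(success_rate, 0.05)  # clamp so an untested or dead neighbor doesn't blow up
    return protocol_metric / rate

# A link the protocol rates at cost 10 looks like cost 50 if only about 20%
# of our packets through that neighbor seem to get delivered.
print(effective_metric(10.0, 0.2))
```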
A further issue is network scaling. I have some thoughts on this, but my guess is they are totally wrong. Really, I think we will have to incrementally grow the networks and discover what the real traffic patterns and scaling problems are.
Posted by moore at 23.09.03 09:47