The Ambitions and Challenges of Mesh Networks and the Local Internet Movement 56

Lashdots writes: Two artists in New York are hatching a plan to teach kids about the internet by building their own. They'll be creating a small, decentralized network, similar to a mesh network, to access other computers, and they'll be developing their own simple social network to communicate with other people. It's part of a growing movement to supplement the Internet with resilient, local alternatives. "And yet, while the decentralized, ad hoc network architecture appeals philosophically to tech-savvy users fed up with monopolistic ISPs, nobody’s found a way to make mesh networks work easily and efficiently enough to replace home Internet connections. Built more for resiliency than for speed, each participating router must continuously search for the best paths to far-flung machines. For now, that makes them of limited interest to many ordinary consumers who simply want to check their email and watch movies."
This discussion has been archived. No new comments can be posted.

  • Intractable issue (Score:5, Insightful)

    by phantomfive ( 622387 ) on Tuesday May 05, 2015 @11:19PM (#49626499) Journal
    The most intractable issue, even once the routing problem is solved, is that huge amounts of traffic are all going to a few places, and those places require a lot of bandwidth. For example, it would really suck to live next to Google's data centers, or even Slashdot's data centers, because a lot of traffic would be going through your wifi to get to Google.

    IF traffic were spread evenly across the network, there wouldn't be a problem, but it's not. So you kind of need a backbone of some sort. (maybe someone solved this? Solution is unknown to me, though)
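    The hotspot effect described above can be sketched with a toy model (my own illustration, not from the comment): on a grid-shaped mesh where every node sends one unit of traffic to a single popular destination, the nodes adjacent to that destination end up relaying almost all of it.

```python
# Toy model: an n x n grid mesh, every node sends one unit of traffic to a
# single "data center" node at (0, 0). Routing is a simple deterministic
# shortest path: walk along the row to column 0, then up column 0.
def relay_load(n):
    """Return units of traffic arriving at each node (relayed or delivered)."""
    load = {(r, c): 0 for r in range(n) for c in range(n)}
    for r in range(n):
        for c in range(n):
            if (r, c) == (0, 0):
                continue  # the sink doesn't send traffic to itself
            rr, cc = r, c
            while (rr, cc) != (0, 0):
                if cc > 0:
                    cc -= 1  # first walk the packet to column 0 ...
                else:
                    rr -= 1  # ... then up the column toward the sink
                load[(rr, cc)] += 1  # this node handles one more unit
    return load
```

    On an 8x8 grid, the sink's column-0 neighbour relays 55 of the 63 units while the far corner relays none, which is exactly the "it would suck to live next to Google" problem.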
    • More subscribers means more bandwidth, so you locate the servers in a distributed fashion, near where the users are. This is already the trend, but it would be even more so.

      • by raymorris ( 2726007 ) on Wednesday May 06, 2015 @01:01AM (#49626837) Journal

        Of the just over 1 billion web sites currently online, fewer than 0.000001% have more than 3 servers per CONTINENT. To have a server in each province / state would increase the costs several thousandfold.

        There are about ten web sites in the world that could actually have servers in thousands of locations without going bankrupt.

        There is a reason your neighborhood street that you live on isn't 2,000 miles long. It connects to a minor collector (street with several stop signs), which then connects to a major collector (street with a few stop signs), which then connects to an arterial (street with stop lights), which connects to a major arterial (three or more lanes each way), which then connects to a freeway, which then connects to an interstate. Streets are laid out like that because a hierarchy of larger and larger paths is the only halfway efficient way to move stuff from any house in the country to any other house. That's just as true with digital stuff - it only works when you put fat fiber under the rivers, through the deserts, and over the mountains.
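        The street-hierarchy argument can be put in rough numbers (a sketch of my own, not the poster's): compare worst-case hop counts in a flat grid mesh against a balanced hierarchy over the same number of nodes.

```python
# Rough comparison of worst-case path lengths: a flat n x n grid mesh
# versus a hierarchy of ever-fatter links, modeled as a balanced tree.
def grid_diameter(n):
    # Corner-to-corner Manhattan distance on an n x n grid mesh.
    return 2 * (n - 1)

def tree_depth(num_leaves, fanout):
    # Levels needed for a balanced tree with `fanout` children per node
    # to cover num_leaves leaf nodes (integer loop avoids float log).
    depth, capacity = 0, 1
    while capacity < num_leaves:
        capacity *= fanout
        depth += 1
    return depth

def tree_diameter(num_leaves, fanout):
    # Worst case: up to the root and back down.
    return 2 * tree_depth(num_leaves, fanout)
```

        For a million nodes, the flat grid's worst case is 1,998 hops; a fanout-10 hierarchy needs 12. That gap is why the fat fiber under the rivers exists.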

        Which means someone has to decide where to spend $20 million on the next chunk of backbone, and someone has to fork over $20 million and hope that it's the right technology, in the right place, at the right time, and implemented properly.

        • by Nethead ( 1563 )

          So you're saying that come the Zombie Apocalypse I won't be able to order kitty litter from Amazon?

        • There are about ten web sites in the world that could actually have servers in thousands of locations without going bankrupt.

          You're hilarious. You don't even get how this works. You just use data centers located in population centers like always. In those population centers, there are more subscribers, so there is more available bandwidth.

          We may need formal links between population centers. Just like roads, these would reasonably be public infrastructure.

          Meanwhile, only CDNs really need to be hosted in these locations, so some websites' architectures will change slightly with the heavy content hosted by third parties and the rest

          • It's pretty obvious you've never so much as been in a datacenter, nor have any idea how CDNs work (or rather, _fail_ to work, because few pay any attention to the HTTP spec on proxies).

            Some of us actually build this shit and know how it works.

            • Hahaha, "fail to work." Sure, CDNs fail all the time, but they are also used all the time, and that use is only becoming more prevalent. You're going to have to figure out how they work eventually, if you want to keep working.

        • by adolf ( 21054 )

          There are about ten web sites in the world that could actually have servers in thousands of locations without going bankrupt.

          You don't need a server. You need a COTS router running OpenWRT and OpenVPN (with hardware acceleration), a couple of well-placed antennas, and a commercial- (not carrier-) grade symmetric DSL, cable, or wireless connection.

          In other words: You don't need a million-spinning-disks server with its own ability to serve content, you need a million low-power NAPs with a gateway to your own content.

          • You don't need a server. You need a COTS router running OpenWRT and OpenVPN (with hardware acceleration), a couple of well-placed antennas, and a commercial- (not carrier-) grade symmetric DSL, cable, or wireless connection.

            In other words: You don't need a million-spinning-disks server with its own ability to serve content, you need a million low-power NAPs with a gateway to your own content.
            How much traffic does google.com see from my small Ohio town of ~45k citizens? Answer: Not enough to swamp

            • by adolf ( 21054 )

              No.

              I'm suggesting that it route.

              Nowhere did I suggest that Google not have their own (hard-wired, or otherwise out-of-band) connection to that router; indeed, I expect that they would. They've already got server farms; all they need are geographically-diverse mesh nodes.

              And you're making the logical error that others seem to be making: That every purpose in having any network is to get free and fast access to the greater Internet, and anything that fails at this promise is utterly useless.

              Following this m

      • It's already distributed, but living next to a data center is going to be lousy because everyone will want to use your bandwidth... even if the data center is small.
    • by Darinbob ( 1142669 ) on Wednesday May 06, 2015 @12:50AM (#49626795)

      I've been doing mesh stuff for over a decade, though I'm not an expert in it. This is not easy stuff. Some of it might work in this case, though: assume everyone is near enough to each other for good connectivity, and accept wasting power and bandwidth on constantly reevaluating your routes, which is OK because these are probably constantly-powered laptops. I.e., a dorm room.

      But it's not going to work well for longer and less reliable links. They'll need to do the sorts of things that wifi doesn't do (I'm assuming wifi because they don't sound like the people to design their own radios). Then there will be the mess of optimizing their network so someone isn't stuck with horrid latency because of all the hops necessary to reach them. Line-of-sight issues are messy and need optimization too; they'll probably need repeater or bridge nodes. If the nodes are mobile, then the constant updating of routing tables will screw things up as you move from one internet bridge to another. It may be better to have immobile wifi hotspots which are then connected to a mesh, an idea that's been around a while.
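      The "constantly reevaluating your routes" cost can be illustrated with a toy distance-vector loop (a sketch of my own; real mesh protocols such as B.A.T.M.A.N. or OLSR are far more involved): every node repeatedly merges its neighbours' route tables until nothing changes, and any topology change forces the whole exchange to run again.

```python
# Toy distance-vector convergence: each node learns routes by merging
# its neighbours' tables (Bellman-Ford style) until a fixed point.
def dv_converge(links):
    """links: dict node -> {neighbour: link_cost}.
    Returns dict node -> {destination: best_known_cost}."""
    dist = {n: {n: 0} for n in links}  # every node knows itself at cost 0
    changed = True
    while changed:
        changed = False
        for node, nbrs in links.items():
            for nbr, cost in nbrs.items():
                # Adopt any route the neighbour knows that improves ours.
                for dest, d in list(dist[nbr].items()):
                    if dist[node].get(dest, float("inf")) > cost + d:
                        dist[node][dest] = cost + d
                        changed = True
    return dist
```

      On a four-node chain A-B-C-D with unit-cost links, the tables settle at dist["A"]["D"] == 3; in a real mesh this settling has to happen over the air, over and over, every time a link flaps.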

    • by adolf ( 21054 ) <flodadolf@gmail.com> on Wednesday May 06, 2015 @01:36AM (#49626929) Journal

      You're making the (perhaps flawed) assumption that the purpose of such a mesh network is to access the greater Internet.

      If I want Internet access, I'll just pay for it: Basic and relatively slow (or relatively fast, depending on point of view) always-on ISP service is cheaper than it ever has been.

      If I want mesh network access, I'll just build a node and find some folks to peer with.

      If I can't get to the Internet from the mesh, and can't get to the mesh from the Internet, I'm OK with that.

      If Google elects to organize a mesh's data on their behalf, then they can co-locate on that mesh. If this results in poorer performance than they expect, they can add more geographically-diverse nodes of their own until they meet demand.

      If someone wants to monetize or give away a path to interconnect the meshes to each other or any other network (including the greater Internet), they do so of their own accord.

      • You're making the (perhaps flawed) assumption that the purpose of such a mesh network is to access the greater Internet.

        The summary kind of implies that people want to use a mesh to connect to the greater Internet.
        After reading your post, I'm not really sure what other use you have for a mesh network, other than to connect to it.

        • by adolf ( 21054 )

          After reading your post, I'm not really sure what other use you have for an Internet, other than to connect to it.

          (Also: 1994 called. They want their Luddite back.)

          • After reading your post, I'm not really sure what other use you have for an Internet, other than to connect to it.

            (Also: 1994 called. They want their Luddite back.)

            Yeah, regurgitating a stale meme rather than providing an answer really proves how vital the internet is. Good work.

          • Mainly, I'm interested in finding what's on other people's servers.

            Though to be fair, the vast majority of those servers are crap. If I could get the 1995 internet back, I would take it.
            • The Internet in 1995 was a special place, indeed.

              Personally I see a small mesh as a potential cross between what both the Internet and the local BBS scene used to be.

              What this might be useful for is in the eye of the beholder.

    • I can see one potential solution: a content-addressable distributed store. No one has ever designed a suitable protocol because there is the usual chicken-and-egg problem, plus ISPs would be wary of creating the greatest tool for piracy since Usenet.

      • Caching doesn't work for everything
        • True. But it works for a lot of things - including most of the really big things, like images, video and archives. If you were to divide every transfer on the internet into 'potentially cacheable' and 'dynamic' you'd find just about every file over a megabyte is in the first set. A content-addressable caching system would greatly reduce the load on the network by removing most of the big downloads, freeing up precious capacity for the non-cacheable things. You can include a fallback to the conventional downlo

      • ...of which USENET was a distributed component?

        I sometimes wonder if/when it will get sort of re-deployed with a focus on secure communication and secure content distribution.

        • Very nearly. The big difference I see would be addressing requests via hash. That means it's just about impossible for a rogue node to break anything, either deliberately or as part of an attack - if the client gets anything other than what it requested, the hash doesn't match.
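          A minimal sketch of that hash-addressing guarantee (my own illustration, assuming SHA-256 digests as addresses): the fetcher recomputes the digest of whatever bytes it receives, so a rogue node cannot substitute different content without the mismatch being detected.

```python
import hashlib

# Content-addressable store: the key IS the SHA-256 of the content,
# so any node can verify a fetched blob without trusting the server.
def put(store, data):
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def get(store, key):
    data = store[key]
    # Recompute the digest; a tampered or corrupted blob cannot match.
    if hashlib.sha256(data).hexdigest() != key:
        raise ValueError("content does not match its address; distrust this node")
    return data
```

          Tampering with store[key] makes get() raise instead of returning the rogue bytes, which is the property that makes untrusted caching nodes safe to use.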

          Usenet with security makes Freenet - but Freenet is heavily focused on a paranoid level of resistance to monitoring which seriously impairs performance.

    • Have every mesh node enforce a rate limit, and automatically flag to nearby nodes when that limit is being approached, so the other nearby nodes can route around that mesh node.
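      One way to read that suggestion as code (a hypothetical sketch; the flag and the tie-breaking rule are my own): a node nearing its rate limit advertises itself as congested, and its neighbours simply prefer any uncongested next hop toward the destination.

```python
# Next-hop selection that honours congestion flags: avoid flagged nodes
# first, then break ties by hop count.
def pick_next_hop(candidates, congested):
    """candidates: list of (next_hop, hop_count) routes to one destination.
    congested: set of next_hop names currently flagged as rate-limited."""
    ranked = sorted(candidates, key=lambda r: (r[0] in congested, r[1]))
    return ranked[0][0]
```

      If node "a" is two hops away but flagged, traffic detours through the longer path via "b"; once the flag clears, the shorter route wins again.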
    • by LWATCDR ( 28044 )

      You left out
      1. Wireless will never have the bandwidth of fiber.
      2. They will be limited to very close to line of sight. Sucks to have a national park, state park, or even a large farm in the way.

      "IF traffic were spread evenly across the network, there wouldn't be a problem, but it's not. So you kind of need a backbone of some sort."
      In theory caching would work but that would have issues with syncing.

  • I looked into this years ago, from physical-layer support for full-duplex and half-duplex nodes (this was fun, since I am a hardware guy) all the way up to designing a node-addressing scheme which both helped with routing and allowed dynamically adding and deleting nodes (not so much fun, but an interesting problem). The largest problem I found was scaling, which would have required tunnels (wormholes) to high-traffic endpoints or to shunt traffic around congested areas. Discouraging free riders was handled w
