Friday, January 28, 2011

Egyptians Are Trying To Tell Us Why Facebook Should Not Replace The Internet

If you could post from Egypt to Facebook today, maybe you'd ask, "When did the Internet break?"
The Internet rose up from the ARPANET to be a network of computers immune to widespread outage (e.g. a nuclear attack from the Ruskies). It's made to use available pathways until it finds a route from A to B. When things are bad, you have a lot of hops between A and B to circumvent the damage-- to the point where your routing takes forever and maybe gives up. You can block access to specific servers, or to increasingly larger blocks of addresses, to try to shut down a message. But if an enemy is trying to block access to something that proliferates (like a blog post that gets shared around), it has to plug all of those holes at once. It's difficult. If the message spreads widely enough, it's impossible. Or at least, it used to be.
The Internet has been about getting big while satisfying everyone. That's what the long tail is about: getting your message to your peeps out there, no matter who those peeps are. Along the way, spreading the message was made easier through Twitter and Facebook. Facebook, with its 600 million-plus users, is almost the Internet itself. People use it to communicate. They blog "Notes" through it. They send around messages. They post pics and videos. They share links. They build communities. All of that functionality-- which used to be spread across the pluralistic bedlam of the World Wide Web-- is now done inside Facebook. You can log onto Facebook and stay tuned to that one "channel." For an Internet guy, the future seems bleak: Facebook has all the marbles.
People tuned into food security warn about the use of mono-crops. A single cloned potato variety, planted throughout Ireland and much of Europe, had no resistance when blight arrived, and the result was the Great Famine. Farmers will have to switch to a different banana strain before the current strain fails altogether. Mono-crops make for a single point of failure. Diversity and pluralism among plants or farm animals is required, or else one widespread weakness can be exploited and affect all of the organisms. Think of the goldmine that Facebook presents to hackers: they can get three times more data by raiding Facebook than they can by raiding the IRS database.
Our world is increasingly reliant on the Internet. Anyone who tells you that the world is now networked-- as if we've only had wires since 1993-- is naive. Since the days of the telegraph, we've been using a worldwide network to carry out business and communication. Telephones, teletypes, fax machines, EDI-- they've all been used to transmit critical data. The switch to the Internet made the difference: it allows for more dynamic connections and more data, and it was more resistant to attack because it didn't hinge on your phone line. Theoretically, the nuke-proof Internet is a better transportation medium. Few countries would consider cutting their national phone lines. Because the Internet carries business relevant in and around Egypt, Egypt cannot simply switch off its routers, ISPs and the big "pipe" that runs the Internet into the country. That would be suicide for the government, and a smothering of its business services. But thanks to Facebook and Twitter, they don't need to kill the Internet to cut people off from it. Clip two sites and some of their satellites and you have it squashed.
Twitter has it figured out. Their API is widely used by HootSuite and a host of other services to digest, store, and post data to Twitter. They have it right. Facebook is a walled garden: data gets in, but it doesn't get out. People like that, but this clustering of services under the umbrella of relatively few sites has made for a dangerous situation.

By the nature of the Internet, shutting down access is like catching smoke. The Web will find a way. A post you put "out there" will be cached by other sites and users. Email will queue up and retry. Your Usenet post would have proliferated. Little of that happens with Facebook. Twenty years of good, durable technical practices have been undone by a bunch of hoodied hipsters. I heard an engineer for NowPublic chortle about users' comments, mocking people who complained that their comments didn't appear or were lost. The idea that data can be lost and that's okay is both essential to the Internet's growth and toxic to it-- like a politician who has to abandon his morals to win an election. It also means that material you put out there can either be stuck inside a walled garden (Facebook) or inside a site that lives with low-grade Alzheimer's (Twitter) and some favoritism (e.g. Julian Assange and WikiLeaks getting squelched from the trends by an unaccountable Twitter).

By relying on Facebook and Twitter, users are putting their hopes in two sites. By blocking one site, you can cut off access between your people and six hundred million others, their data and their coordination. The accretion of traffic to a handful of sites (this includes Google, eBay and the other most popular sites) undoes the brilliance of what the Internet sought to achieve: a network resistant to attack and robust against attempts to cut people off from each other. We've given up durability to get convenience. I hope we "Like" it that way. More on the problem with Facebook and Twitter replacing your Internet.

Monday, January 24, 2011

Here's My Conundrum

I am trying to parse the content in this page to get the list elements and which list they appear in.
The lists on the right side are straightforward-- one link per list. The list on the left side is trickier.

Because the links are buried inside layers of DIV tags and styling, I can't see a way to compare the blocks beside each other, know whether they are part of the same list, and isolate the block of code relevant to one specific link. For the bonus round, these stylings may duck and weave-- if they change a little, I need to apply the same logic to the HTML to work out what the code means and where it sits in the document.
I want to be able to fish out these elements and end up with an array of the items in one list (one area). How can I know how many regions I have?
Anyone have any bright ideas? Want to show off your smarts? Want a contract to do this: set up the logic to read a page, find its lists, pull them out, and isolate each item in a cell of an array specific to a particular list?
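To make the ask concrete, here's a rough sketch of the kind of logic I mean, using only Python's standard-library HTMLParser. It assumes the page's lists are real UL/OL elements (the sample HTML below is made up, since the actual page isn't shown here); if the left-side "list" is really a stack of DIVs, you'd key on a distinguishing class attribute instead, but the grouping idea is the same: open a new region when a top-level list opens, and append each link's text to the current region.

```python
from html.parser import HTMLParser

class ListExtractor(HTMLParser):
    """Collect the text of every link inside each <ul>/<ol>,
    grouped by which top-level list (region) it belongs to."""

    def __init__(self):
        super().__init__()
        self.lists = []           # one sub-array per region
        self.depth = 0            # how many lists are currently open
        self.in_link = False
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag in ("ul", "ol"):
            if self.depth == 0:
                self.lists.append([])   # a new region starts here
            self.depth += 1             # nested lists stay in the same region
        elif tag == "a" and self.depth > 0:
            self.in_link = True
            self._text = []

    def handle_endtag(self, tag):
        if tag in ("ul", "ol"):
            self.depth = max(0, self.depth - 1)
        elif tag == "a" and self.in_link:
            self.in_link = False
            text = "".join(self._text).strip()
            if text:
                self.lists[-1].append(text)

    def handle_data(self, data):
        if self.in_link:
            self._text.append(data)

# Made-up sample: links buried in DIVs inside two separate lists.
extractor = ListExtractor()
extractor.feed("""
<div><ul><li><a href="/a">Alpha</a></li>
<li><div><a href="/b">Beta</a></div></li></ul></div>
<ul><li><a href="/c">Gamma</a></li></ul>
""")
print(extractor.lists)       # [['Alpha', 'Beta'], ['Gamma']]
print(len(extractor.lists))  # 2 -- the number of regions found
```

Because intermediate DIVs are simply ignored, the styling layers can duck and weave without breaking the grouping; only the list boundaries matter. Counting regions then falls out for free as the length of the outer array.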

Saturday, January 22, 2011

Categories and Tags in WordPress

In case you weren't in our WordCamp Victoria class today, here's a run through of our presentation: