A while back I did the second part of Cisco Network Academy Instructor training together with my colleagues here at HRC. One of the subjects we covered was Frame Relay, a technology often used to send Internet data from one place to another when those places are a long way apart. Compared with Ethernet, which is often used for similar purposes when the places are very close together, Frame Relay seemed like a triumph of fairness from which lessons might be learned even by those of us who do not, consciously, use computer networks in our day-to-day business. My reasoning worked like this:
Frame Relay is used to send network data which has been chopped up into chunks that we can call frames. Each frame contains a certain amount of data. The network company makes a commitment to its customer to relay this data at a certain guaranteed rate, which means that a certain minimum number of frames will be transferred in a given time. This is what the customer pays for.
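To make that commitment concrete, here is a back-of-the-envelope sketch in Python. The guaranteed rate is what Frame Relay calls the Committed Information Rate (CIR), measured over a short interval; multiplying the two gives the committed burst, the number of bits, and hence frames, the customer can count on per interval. All the numbers below are made up for illustration:

```python
# Made-up numbers: a 64 kbit/s commitment measured over 1/8-second
# intervals, with 1000-bit frames.
CIR = 64_000        # Committed Information Rate, bits per second
Tc = 0.125          # measurement interval, seconds
FRAME_BITS = 1_000  # size of one frame, bits (illustrative)

Bc = CIR * Tc                       # committed burst: bits per interval
guaranteed_frames = int(Bc // FRAME_BITS)

print(f"{Bc:.0f} bits, i.e. {guaranteed_frames} frames, guaranteed per interval")
```

So with these figures the customer is entitled to eight frames in every eighth of a second, come what may.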
Consider your own computer’s use of the network when you opened up this page. First there was a lot of activity while your web browser sent messages to bitterjug.com asking for the latest blog entries and then more as those entries were delivered. Then there was a time of inactivity while you read the first paragraph of this entry and realised you were in for another corking slab of Bitterjug wit and wisdom; so you clicked on the link to see the rest of this entry. Then there was another busy time while this page downloaded and since then you have been diligently reading and your computer hasn’t been using the network so much.
This is all assuming you’re not one of my students who open 26 browser windows at once so as to maximize their limited free browsing time, but more of this later.
The network is, however, capable of transferring data much faster than that guaranteed minimum rate. Computers sending data to one another tend to do so in an uneven way: relatively long periods of not much going on interrupted, now and then, by periods when they want to send an awful lot in a great hurry. Frame Relay technology is designed to capitalise on this bursty kind of network use to offer the best service at the least cost.
Costs are reduced by sharing the network links: it would be wasteful for each customer to have a dedicated link since they only use it in bursts; the rest of the time it stands idle. In Frame Relay, other customers are allowed to share a link on the basis that, for most of the time, one will be sending while the other is idle. Furthermore, since the network can transmit data much faster than the rate that the customer is paying for, when the busy time comes each customer’s data can be sent at (or, at least, closer to) the speed they’d like it to go, giving the impression of a faster connection than the one they’ve paid for. In fact, if some customers under-use their connections, the others who use them a lot can enjoy the extra data transfer capacity as a bonus, up to a maximum set by the network company.
Recall that the customer is entitled to transmit a certain minimum number of data frames in a given timeslot. The data frames sent by a customer in each timeslot are counted; those below the minimum number are marked “guaranteed delivery”, those above are marked “this may be discarded if things get too busy”. When things do get too busy, the data frames end up sitting in queues waiting to be sent down links in the network. If those queues get too long, the “may be discarded” frames are discarded with the result that everybody’s “guaranteed delivery” frames go through the link nice and quickly since they don’t have to spend too much time queueing.
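In real Frame Relay the “may be discarded” marking is a single bit in the frame header, the Discard Eligibility (DE) bit. Here is a toy version of the bookkeeping; the numbers and the queue policy are invented for illustration, not taken from any real switch:

```python
from collections import deque

GUARANTEED_PER_SLOT = 8   # frames covered by the commitment each timeslot
MAX_QUEUE = 10            # beyond this, the link is "too busy"

def mark_frames(frames_sent):
    """Label each frame sent in one timeslot."""
    return ["guaranteed" if i < GUARANTEED_PER_SLOT else "discard-eligible"
            for i in range(frames_sent)]

def enqueue(queue, frames):
    """Queue frames for a busy link; once the queue is too long,
    discard-eligible frames are dropped but guaranteed ones still join."""
    dropped = 0
    for frame in frames:
        if len(queue) >= MAX_QUEUE and frame == "discard-eligible":
            dropped += 1
        else:
            queue.append(frame)
    return dropped

queue = deque()
dropped = enqueue(queue, mark_frames(14))   # a bursty slot: 14 frames
print("queued:", len(queue), "dropped:", dropped)
```

With fourteen frames in one slot, the first eight are guaranteed a place in the queue, two more squeeze in while there is room, and the last four are quietly discarded to keep the queue short.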
The discarded frames don’t cause a big problem because there are other mechanisms at work in the network that detect that data is missing and request to have it sent again. This retransmission, of course, makes the network seem slower but, unless the network company commits to a lot more guaranteed data transfer speed than it has available, it doesn’t happen too often and is offset by the extra network capacity available during those busy times.
So far so good, but what happens when two or more customers have their busy times at the same time? This is the clever bit: they both get their share. Each customer gets the minimum information rate that the network company agreed to.
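One simple way to picture “they both get their share” is a two-step allocation: satisfy every customer’s committed rate first, then divide any spare capacity among those still wanting more. The sketch below is my illustration of that idea, not the actual scheduling algorithm a Frame Relay switch uses:

```python
def share_link(capacity, demands, committed):
    """Give every customer its committed rate first, then split any
    spare capacity evenly among those who still want more."""
    alloc = {c: min(demands[c], committed[c]) for c in demands}
    spare = capacity - sum(alloc.values())
    wanting = [c for c in demands if demands[c] > alloc[c]]
    while spare > 1e-9 and wanting:
        bump = spare / len(wanting)
        for c in list(wanting):
            extra = min(bump, demands[c] - alloc[c])
            alloc[c] += extra
            spare -= extra
            if demands[c] - alloc[c] < 1e-9:
                wanting.remove(c)
    return alloc

# Two customers hit their busy time at once on a 100-unit link:
# both are committed to 30 units, but A wants 80 and B wants 40.
alloc = share_link(100, {"A": 80, "B": 40}, {"A": 30, "B": 30})
print(alloc)
```

Both customers get the 30 units they are paying for, and the spare 40 units top them up to 60 and 40 respectively; nobody is starved even though A is asking for twice as much as B.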
In the early days of Ethernet, all the computers were connected using the same piece of wire. If more than one computer tried to transmit something at the same time the two transmissions would interfere with each other and the content of both would be destroyed in a network traffic accident known as a collision. When this happened the computers would each roll an electronic dice to decide how long to wait before trying to transmit the same message again. Whoever rolled the lowest number would try again first.
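That electronic dice is, in classic Ethernet, binary exponential backoff: after each successive collision the range of the random wait doubles, up to a cap. A minimal sketch:

```python
import random

def backoff_slots(attempt):
    """Roll the 'electronic dice' after a collision: wait a random
    number of slots between 0 and 2**attempt - 1, with the range
    capped after 10 doublings (as classic Ethernet does)."""
    return random.randint(0, 2 ** min(attempt, 10) - 1)

# Two stations that just collided for the first time each roll;
# whoever rolls lower retries first.
a, b = backoff_slots(1), backoff_slots(1)
print("A waits", a, "slots, B waits", b)
```

After a first collision the dice has only two faces (wait 0 or 1 slots), so repeated collisions are still likely; by the tenth attempt the range has grown to over a thousand slots, which spreads the retries out.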
To reduce the number of these collisions, a computer “listens” for other signals on the wire before starting to transmit something itself. If someone else is speaking, the computer patiently waits until they are through before going ahead. This doesn’t rule out collisions altogether: they can still occur if two computers decide, at the same moment, that they want to speak. Both will listen and hear nothing, and then both will start to gabble over each other. When this happens, every computer on the network has to wait for them to realise and then back off and let someone else have a go. This wastes valuable time when someone else could be transmitting something, and in the meantime neither of the two messages that collided has been sent.
It is the time wasted in collisions that makes Ethernet become less efficient as it gets busier. Ethernet can never make the full transmission capacity of its wires available to the computers it serves: as transmissions occur more and more frequently, collisions also become more frequent, and more and more time is wasted instead of being used for communication.
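You can see this effect in a toy simulation. Here each of n stations tries to transmit in a time slot with some fixed probability; a slot carries data only when exactly one station transmits, and slots with two or more attempts are wasted collisions. This is a simplified slotted model with made-up parameters, not real CSMA/CD, but it shows the same shape:

```python
import random

def throughput(n_stations, p, slots=10_000, seed=42):
    """Fraction of slots that carry data: a slot succeeds only when
    exactly one station transmits; two or more means a collision."""
    rng = random.Random(seed)
    good = 0
    for _ in range(slots):
        attempts = sum(rng.random() < p for _ in range(n_stations))
        if attempts == 1:
            good += 1
    return good / slots

for n in (2, 10, 50):
    print(n, "stations:", round(throughput(n, 0.1), 3))
```

Ten stations get more data through than two, but fifty stations, all gabbling over each other, get far less through than ten: adding users past a certain point shrinks the total, not just each individual share.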
The reason I found Frame Relay to be so fine and dandy is that I am accustomed to dealing with Ethernet. Ethernet sort-of shares out the available data transfer resource according to who’s using it most; in fact it is a system of contention: the more you ask for, the more you get — up to a point. The problem with Ethernet is that when everyone is asking for a lot, there is, in fact, less to go round. The reasons for this have to do with the way in which many computers get access to the shared communication medium.
The fact is that network communication capacity (also known as bandwidth) is a finite resource in any network, and has to be shared out, somehow, among the users. The upshot is that when students at the college each open up 26 browser windows at once to maximize the use of their precious 80-minute ‘lunch-hour browsing’, they actually get access to less bandwidth than they would if they used only one browser window each. No wonder customers in the cyber cafe complain that “the Internet is slow” if they are unfortunate enough to be on line at the same time as my students.
First I want to check my email. Yahoo mail is very popular in Kenya because it comes up faster than Hotmail (I expect Gmail will become popular here for the same reason since it doesn’t blat its users with Flash ads). But lo!, the Internet is slow! What to do while I wait for the Yahoo log-in page to load? I know, I’ll open another window and go to Christian Singles dot com and see if any boys, cute or otherwise, have replied to my ad. But this is taking ages to come up and I’m getting bored. My neighbour is looking at photographs of sexy American gangsta’ rappers; I want some of that!…
The Tragedy of the Commons is the title of an article by Garrett Hardin published in Science in 1968. Hardin talks about the problem of overpopulation and describes it as one to which there is no technical solution: there is no clever technological trick we can pull that will enable the population to continue to grow indefinitely.
He appeals to the notion of the British commons: areas of pasture, open to all. When a herdsman considers cost vs benefit of adding another animal to his herd he’ll realise that he’ll benefit from the value of a whole extra animal but suffer only a small share of the cost of extra grazing because that cost is shared among all herdsmen who use the commons. The tragedy sets in as all herdsmen continue to think the same way, introducing more and more animals until over-grazing destroys the commons and starvation results.
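The arithmetic of the trap is tiny, and worth making explicit. With made-up numbers — each animal worth one unit to its owner, its grazing cost also one unit, but split among all N herdsmen:

```python
def net_gain_to_owner(n_herdsmen, benefit=1.0, grazing_cost=1.0):
    """What one herdsman gains by adding one more animal: he keeps
    the whole benefit but pays only his share of the grazing cost."""
    return benefit - grazing_cost / n_herdsmen

print(net_gain_to_owner(1))    # alone, he bears the full cost himself
print(net_gain_to_owner(10))   # on a shared commons, most of it falls on others
```

As soon as there is more than one herdsman, the gain is positive, so every individually rational herdsman keeps adding animals, and the commons pays.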
Hardin explains that he uses the word Tragedy in the same way that the philosopher Whitehead used it in a 1948 publication Science and the Modern World. Whitehead reportedly said
The essence of dramatic tragedy is not unhappiness. It resides in the solemnity of the remorseless working of things […] This inevitableness of destiny can only be illustrated in terms of human life by incidents which in fact involve unhappiness. For it is only by them that the futility of escape can be made evident in the drama.
Wikipedia is a fantastic online encyclopedia that is written by a voluntary community of its users for the purpose of sharing knowledge.
On reflection, I’m not surprised that Wikipedia gives so much attention to the principles surrounding the Tragedy Of The Commons, because the Wikipedia project is closely tied to the idea of a Creative Commons; more of this later.
I can understand how it happens, and I don’t expect it to change. I was reminded of The Tragedy Of The Commons by Garrett Hardin. Searching for it I discovered a wealth of relevant information on Wikipedia and suddenly realised I was reading about economics and political science rather than computer networking. In both, it seems, there is need for a system of regulation (like, perhaps, that of Frame Relay) to avoid destructive over-exploitation. An Ethernet local area network is clearly a commons and the situation during lunch-hour browsing here at the college is definitely tragic, in the sense that Hardin meant it. I wonder if all unregulated commons are thus doomed to ruination.
Creative Commons provide licences under which artists and other creative people can distribute their work and have some of their rights protected. For example, you can publish a song or photograph on The Web, with a licence that allows anyone to use it in their own derived works but which ensures that you will be credited as the original creator. The licences are mix-n-match in a very simple but clever way and you really ought to check out their web site and tell your friends about it.
These licences are inspired by the GNU General Public Licence created by Richard Stallman to enable him to distribute the computer programs he writes without them ever becoming part of a commercial product. The significance of this document is frequently underestimated; without it there would be no Wikipedia, no Linux, no Mozilla; the Web would probably have taken longer to take off and would be under a much larger degree of commercial control; and the programs I am using to write and publish this page would not exist or, if they did, would not be freely available.
My friend Paula works for the Creative Commons, an organisation that seeks to promote sharing of creative works through a system of legal licences that are more flexible than traditional copyright but offer more protection than the public domain. The reason, according to Wikipedia, that utilising a commons leads to tragedy is that each individual’s decision to use the commons has an associated negative impact on the other users. These negative effects are called externalities. In the Creative Commons, however, creative works are made available via digital media which can be copied freely and repeatedly without negative externalities. This is the very issue that is currently causing headaches for the music and video industries, but for the Creative Commons or, more pertinently, for creative people around the world who want to be able to collaborate freely and to share their work, it is a great strength.
The computer networks on which The Internet is built, and their data transmission capacity, form a kind of Commons. Either that capacity must be regulated and shared under the (private) control of a network company, as is the case with Frame Relay, or it may be subject to the tragic ravages of contention systems like Ethernet. The Creative Commons, and projects like Wikipedia, can exist because, thanks to computer technology and that same Internet, it is possible to copy and distribute digital information without costly externalities. They are an information commons, not a material one. So long as there exists a free and fair Internet, their story should be less of a tragedy and more of a comedy.