Tuesday, September 30, 2008

Reading on "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links"

Summary: this paper uses throughput and goodput as the metrics to compare different mechanisms for improving TCP performance over wireless links. The authors classify the mechanisms into three classes: end-to-end improvements, link-layer protocols, and split connections, and discuss each mechanism in detail. Based on the experimental results, the authors give some valuable conclusions. The most important ones are: (1) the split-connection approach is useless; (2) by adjusting the TCP protocol itself, TCP performance over wireless links can be improved a lot.
Background: some knowledge about TCP is necessary, for instance fast recovery and cumulative acknowledgments.
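To keep the two metrics straight in my head, here is a tiny sketch of how I understand them (my own toy example, not something from the paper): throughput counts every byte the sender puts on the wire, while goodput only counts the distinct useful bytes the receiver gets.

```python
# Toy illustration (my own, not from the paper): throughput vs. goodput.
# Throughput counts every transmitted byte, including retransmissions;
# goodput counts only unique application bytes delivered during the interval.

def throughput(bytes_sent_total, seconds):
    """All bytes put on the wire, retransmissions included."""
    return bytes_sent_total / seconds

def goodput(bytes_delivered_unique, seconds):
    """Only distinct application bytes that actually reached the receiver."""
    return bytes_delivered_unique / seconds

# Example: 10 MB of data, 1 MB of it retransmitted, over 20 s of a lossy link.
sent, unique, t = 11e6, 10e6, 20.0
print(f"throughput = {throughput(sent, t)/1e6:.2f} MB/s, "
      f"goodput = {goodput(unique, t)/1e6:.2f} MB/s")
```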


reading on "MACAW"

Summary: this paper proposes MACAW, a new MAC-layer protocol for wireless networks. To solve the problems found in MACA, MACAW uses a number of solutions, including back-off counter copying, per-flow FIFO queues, the ACK packet, the DS packet, and so on. Each solution targets one specific problem. Throughout the design, the authors put throughput fairness first. The paper leaves readers a lot of future work.
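If I remember the backoff part right, MACAW replaces binary exponential backoff with MILD (multiplicative increase, linear decrease) and lets stations copy the backoff value carried in overheard packet headers, which is where much of the fairness comes from. Here is a rough sketch of my understanding; the constants and ranges are my own approximations, not the paper's exact numbers.

```python
# My rough sketch of MACAW-style backoff (MILD + backoff copying).
# The exact constants and ranges are approximations, not copied from the paper.
BO_MIN, BO_MAX = 2, 64

class Station:
    def __init__(self):
        self.backoff = BO_MIN

    def on_collision(self):
        # Multiplicative Increase: grow the counter when an RTS gets no CTS.
        self.backoff = min(int(self.backoff * 1.5), BO_MAX)

    def on_success(self):
        # Linear Decrease: shrink gently after a successful exchange,
        # instead of resetting to the minimum the way BEB does.
        self.backoff = max(self.backoff - 1, BO_MIN)

    def on_overheard(self, advertised_backoff):
        # Backoff copying: adopt the value carried in overheard packets,
        # so all stations in a cell converge to a common (fair) backoff.
        self.backoff = advertised_backoff
```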
Background: some basic knowledge of wireless communication.
Discussion: First of all, this is a very good paper, not because of its idea but because of the inspiration it gives us. It shows a new way to write a paper: list all the problems of the previous solution and solve them one by one. If a perfect solution can't be found, leave it as an open problem.
1, the discussion about the ACK is quite interesting. Retransmission at the transport layer is too slow because of the long timeout, so the authors use link-layer recovery instead. I want to ask: does this method violate the golden "end-to-end argument" rule?
2, the simulator created by the authors is really clever; it approximates the medium by dividing the space into small cubes. Can we discuss some issues around wireless simulators?
3, in my opinion, presenting the results purely in tables rather than graphs is not great. Why not use graphs?

Sunday, September 28, 2008

Reading on "Scaling Internet Routers Using Optics"

Summary: this paper presents a new router architecture with a switching capacity of 100 Tb/s. The authors extend Chang's load-balanced router architecture using optical devices. To avoid N*N fibers, the authors suggest using ASGR, which needs only 2N fibers. FOFF (Full Ordered Frames First) is presented to solve the packet mis-sequencing problem. The authors predict the technology that will be available in three years and discuss some of the difficult issues in making the new router practical.
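To convince myself why something like FOFF is needed at all, I wrote a toy sketch (mine, not the paper's model) of the basic two-stage load-balancing idea: the first stage spreads the cells of a flow round-robin over the intermediate linecards, and if those intermediate paths have different queueing delays, the cells come out of the second stage out of order.

```python
# Toy sketch (my own) of why a load-balanced two-stage router mis-sequences
# packets: cells of one flow are spread round-robin over intermediate
# linecards whose queues drain at different speeds.
N = 3
extra_delay = [0, 2, 5]          # assumed per-intermediate-linecard queueing delay

departures = []
for seq in range(9):             # nine cells of the same flow
    mid = seq % N                # first stage: round-robin spreading
    departures.append((seq + extra_delay[mid], seq))

out_order = [seq for _, seq in sorted(departures)]
print(out_order)                 # not 0,1,2,... -> the re-ordering FOFF has to repair
```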
Background: an understanding of Chang's load-balanced router architecture.
Discussion: I had a lot of trouble understanding this paper. I have to admit I don't truly understand its core idea.
1, Is FOFF a good way to guarantee that packets stay in sequence? My intuition says the idea is too simple for the authors to be the first ones to have it. Meanwhile, even if bringing N*N+1 buffers to the linecard is not an issue, what about the delay?
2, I am wondering how to prove that the M*G*M interconnection is equivalent to N*N. I feel totally lost about "bit-slicing". What is it? Introducing another, smaller 3-stage router per linecard?
3, How practical is the router in the face of IPv6? Unbelievably, the authors discuss the future router without mentioning IPv6.
4, Why does letting M equal G+L-1 eliminate the imbalance? Maybe I am too stupid to understand even the simplest example.
5, Last but not least, the authors didn't convince me that the router can work correctly without a central switch fabric configuration.
I have read this paper thoroughly twice. It's sad to still have so many questions.

Sunday, September 21, 2008

Reading on "A Fast Switched Backplane for a Gigabit Switched Router"

Summary: The main contributions of this paper are two algorithms, iSLIP and ESLIP. The author also reviews the history of router architecture development. To increase the capacity of routers, the crossbar technique was invented. Its main advantages are: (1) datagrams can be transmitted simultaneously; (2) connections from the line cards to the central switch are point-to-point links. Virtual Output Queueing (VOQ) is used to eliminate head-of-line blocking. Basically, head-of-line blocking means that the first packets of several queues contend for the same output port, while the second packets of those queues may be destined for different ports yet have to wait until the packets ahead of them leave.
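To make sure I actually understand the request-grant-accept round with the two rotating pointers, here is a sketch of a single iSLIP iteration as I understand it. It is a simplification: the real algorithm runs multiple iterations per cell time and only updates the pointers for grants accepted in the first iteration.

```python
# A sketch (my simplification) of one iSLIP iteration for an N x N crossbar.
# voq[i][j] > 0 means input i has cells queued for output j.

def islip_iteration(voq, grant_ptr, accept_ptr):
    N = len(voq)
    # Step 1: requests -- each input requests every output with a non-empty VOQ.
    requests = [[voq[i][j] > 0 for j in range(N)] for i in range(N)]

    # Step 2: grant -- each output picks the requesting input that appears
    # next in round-robin order starting from its grant pointer.
    grant = [None] * N              # grant[j] = input granted by output j
    for j in range(N):
        for k in range(N):
            i = (grant_ptr[j] + k) % N
            if requests[i][j]:
                grant[j] = i
                break

    # Step 3: accept -- each input picks, among the outputs that granted it,
    # the one that appears next starting from its accept pointer.
    match = []                      # (input, output) pairs matched this round
    for i in range(N):
        granted_outputs = [j for j in range(N) if grant[j] == i]
        for k in range(N):
            j = (accept_ptr[i] + k) % N
            if j in granted_outputs:
                match.append((i, j))
                # Pointers move one past the matched port, and only when the
                # grant is accepted -- this is what desynchronizes the "wheels".
                accept_ptr[i] = (j + 1) % N
                grant_ptr[j] = (i + 1) % N
                break
    return match

# Example: a 3x3 switch where every input has traffic for every output.
# With all pointers synchronized at 0 the first iteration yields only one
# match, which is exactly why iSLIP iterates and lets the pointers drift apart.
voq = [[1, 1, 1] for _ in range(3)]
print(islip_iteration(voq, grant_ptr=[0, 0, 0], accept_ptr=[0, 0, 0]))
```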
Background: high-performance router design is a key component of Internet development.
Questions and Discussion:
1, iSLIP is a really amazing idea using two rotating pointers, like two wheels. Where does this idea come from? I mean its mathematical root.
2, I remember we talked about the deployment of RED in current routers. Where does it sit, in the datapath? Why didn't the author say anything about it? VOQ is very similar to Fair Queueing :)
3, Can we discuss a little bit about the future of router architecture? What will the future router look like?

Thursday, September 18, 2008

Reading on "Supporting Real-time application in an ISPN: Architecture and Mechanism"

Summary: This paper proposes a new architecture for an ISPN (Integrated Services Packet Network), based on the observation that some real-time applications are flexible and can adapt to current network conditions. The authors describe the nature of delay and present some approaches to dealing with it. After introducing WFQ for guaranteed traffic, FIFO for predicted service, and FIFO+ for sharing across multiple hops, a unified scheduling algorithm is presented. The service interface and admission control are also briefly discussed.
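FIFO+ is the part I had to think about the most, so here is my rough reading of it in code (the details may not match the paper exactly, and the smoothing gain is my own choice): each hop accumulates into the packet how far it was ahead of or behind the average delay of its class, and downstream hops order packets by this adjusted arrival time, which evens out jitter across hops.

```python
# My rough reading of FIFO+ (details may not match the paper exactly).
# A packet is a dict carrying an 'offset' field, initialized to 0.0 at the source.
import heapq

class FifoPlusHop:
    def __init__(self):
        self.avg_delay = 0.0          # running average delay of the class at this hop
        self.queue = []               # (adjusted_arrival, tie-breaker, packet) min-heap

    def enqueue(self, pkt, now):
        # Packets that have been "lucky" so far (offset > 0) look like they
        # arrived later and therefore wait a bit longer; unlucky ones jump ahead.
        heapq.heappush(self.queue, (now + pkt["offset"], id(pkt), pkt))
        pkt["arrived"] = now

    def dequeue(self, now):
        _, _, pkt = heapq.heappop(self.queue)
        delay = now - pkt["arrived"]
        # Accumulate (average - actual) delay into the per-packet offset.
        pkt["offset"] += self.avg_delay - delay
        # Simple EWMA of the class's delay at this hop (smoothing gain assumed).
        self.avg_delay += 0.1 * (delay - self.avg_delay)
        return pkt
```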
Some thoughts:
The three authors of this paper are all superstars in research. Based on that alone, there should be some fantastic ideas in this paper, but I really can't find any. What I think is that the authors just combined existing techniques (WFQ, FIFO) to solve a new problem (based on the new observation). The title also confuses me a little. What is the architecture? What is the mechanism? Can we really call them that?

Reading on "fundamental design issues for the future internet"

Summary: This paper discusses service-model issues for a future Internet in which multimedia applications are expected to be very popular. The author states that the goal of network design is to maximize the utility delivered to the applications. Based on this goal, the author defines a simple utility function. Through the analysis, we are told that multiple service classes will work better than a single "best effort" service. The paper discusses many different design options for the future Internet: "implicitly supplied" versus "explicitly requested", "overprovision or not", "admission control or not", and so on.
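The multi-class claim becomes concrete with a toy calculation. The numbers and the exact utility functions below are mine, just in the spirit of the paper: one rigid real-time flow gets no utility below a bandwidth floor, elastic flows have diminishing returns, and partitioning the link by class beats splitting it evenly.

```python
# Toy illustration (my own numbers) of why multiple service classes can beat a
# single best-effort class, in the spirit of the paper's utility-function argument.
import math

def u_elastic(b):
    # Elastic (data) application: concave, diminishing returns.
    return math.log(1 + b)

def u_realtime(b, floor=6.0):
    # Rigid real-time application: useless below its bandwidth floor.
    return 1.0 if b >= floor else 0.0

C = 10.0  # link capacity (assumed units)

# Single best-effort class: the three flows share the link equally.
best_effort = 2 * u_elastic(C / 3) + u_realtime(C / 3)

# Two classes: give the real-time flow exactly its floor, split the rest.
two_classes = 2 * u_elastic((C - 6.0) / 2) + u_realtime(6.0)

print(best_effort, two_classes)   # the two-class allocation wins here
```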
Some thoughts: after I read the paper, nothing really impressed me, but I should still take something away. A few problems:
1, Is the utility function too simple? Is it really valid to draw conclusions based on it (although the author admits the analysis is nonrigorous)?
2, The paper was published around the mid 90s. Has anything in the paper come true in today's Internet?
3, Facing the emergence of new applications, when is the right time to think about changing the network design?




Tuesday, September 16, 2008

Reading on "XCP"

This paper presents the eXplicit Control Protocol (XCP), which aims to solve the problems facing TCP caused by the introduction of high-bandwidth optical links and long-delay satellite links into the Internet. XCP generalizes ECN (Explicit Congestion Notification) and introduces a new concept: decoupling utilization control from fairness control. The idea itself is quite straightforward. It adds a congestion header to the packets, and the routers can write some of the fields in the congestion header according to the congestion conditions. Similar to the "core-stateless" paper, the authors use a fluid model. Moreover, they use control theory to prove the stability of their method.
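The decoupling is easy to state in a few lines. As I understand it, the router's efficiency controller computes one aggregate feedback value per control interval from the spare bandwidth and the persistent queue, and the fairness controller then divides that feedback among flows in an AIMD-like way. Here is a sketch of the efficiency-controller part; alpha = 0.4 and beta = 0.226 are the gains I remember from the paper, everything else is simplified.

```python
# Sketch of XCP's router-side efficiency controller, as I understand it.
# ALPHA and BETA are the stability gains I remember from the paper;
# the bookkeeping around them is simplified.
ALPHA, BETA = 0.4, 0.226

def aggregate_feedback(capacity, input_traffic, persistent_queue, avg_rtt):
    """Total bytes the router wants flows to add (+) or shed (-)
    over the next control interval (one average RTT)."""
    spare = capacity * avg_rtt - input_traffic      # bytes of spare capacity
    return ALPHA * spare - BETA * persistent_queue

# Example: a 10 MB/s link, 9 MB arrived in an RTT of 1 s, 0.5 MB standing queue.
phi = aggregate_feedback(10e6, 9e6, 0.5e6, 1.0)
print(phi)   # positive -> ask senders (via the congestion header) to speed up
```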
Dina Katabi is a powerful researcher. This is the first paper I have read with her as first author.
XCP is a nice idea, but also a fairly natural one: after the solutions for congestion control on the end-host side and the router side had been explored, it was time to think about using explicit interaction between the two. But a good paper needs thorough theoretical analysis and experiments to verify correctness and convince the readers, and obviously Dina's paper has everything.
What I take away is mainly the "rethinking congestion control" part. As Dina said, "our initial objective is to step back and rethink the congestion control protocol without caring about backward compatibility or deployment."

Monday, September 15, 2008

reading on "RED"

This paper proposes a new congestion control mechanism, Random Early Detection (RED), which basically uses the average queue size to control congestion. The gateway compares the average queue size with two thresholds, a minimum and a maximum, and takes different actions based on the comparison: below the minimum, no packets are marked; above the maximum, every packet is marked; between the two, packets are marked with a probability. The authors conducted many experiments under different network configurations and demonstrated the performance and efficiency of RED.
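The whole mechanism fits in a few lines, which is probably part of why it actually got deployed. Here is a sketch of the per-packet decision as I understand it; the parameter values are placeholders, not the paper's recommended settings.

```python
import random

# Sketch of the RED decision as I understand it; the parameter values below
# are placeholders, not the paper's recommendations.
W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5, 15, 0.02

avg = 0.0      # EWMA of the queue size
count = 0      # packets since the last marked packet

def on_packet_arrival(queue_len):
    """Return True if this packet should be marked/dropped."""
    global avg, count
    avg = (1 - W_Q) * avg + W_Q * queue_len        # exponential weighted average
    if avg < MIN_TH:
        count = 0
        return False
    if avg >= MAX_TH:
        count = 0
        return True
    # Between the thresholds: mark with a probability that grows with avg,
    # spread out by 'count' so marks are roughly evenly spaced (Pa from Pb).
    p_b = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    p_a = p_b / max(1 - count * p_b, 1e-9)
    count += 1
    if random.random() < p_a:
        count = 0
        return True
    return False
```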
Sally Floyd really did great work in this paper, even giving network operators detailed suggestions and rules for configuring the parameters. She used a bunch of experiments to support and verify all her conclusions: not just equations, but convincing experimental results.
But what are the problems?
1, Did Sally really know the gains when she first thought of the RED idea? OK, let's talk about the paper a little more. She first gives an equation for computing the average queue size. Why use such an equation? Then she shows how to compute Pa and Pb. Anyway, her experimental results show that RED achieves almost everything she wanted.
2, Did Sally conduct sufficient experiments? Absolutely not. She only conducted experiments on two different topologies. If so, can she claim that her conclusions are representative?
Anyway, for me, the most amazing part of the paper is that RED is really used in the real world. I have to ask: how many networking research ideas can be applied to the real world? Should we keep thinking of ideas that are just papers?

"fair queueing" and "core-stateless"

I will write about them later. I am too LAZY right now.

Tuesday, September 9, 2008

Reading on "Congestion Avoidance and Control"

The authors added several features to 4.3BSD TCP and improved its congestion avoidance. One of the features is slow start. Although the authors were not the first to have this idea, they did more work on it, including developing a new way of calculating the round-trip time estimate.
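The round-trip timing part is the piece I remember best: instead of the old fixed-multiplier estimator, they track both a smoothed mean and a mean deviation of the RTT samples and set the retransmit timeout from both. Here is a sketch of that estimator as I understand it; the gains 1/8 and 1/4 and the factor of 4 are the commonly cited values, and the fixed-point tricks from the paper are left out.

```python
# Sketch of the Jacobson-style RTT estimator, as I understand it.
# Gains of 1/8 (mean) and 1/4 (deviation) and the factor of 4 are the
# commonly cited values; integer/fixed-point tricks are omitted.

class RttEstimator:
    def __init__(self, first_sample):
        self.srtt = first_sample        # smoothed RTT
        self.rttvar = first_sample / 2  # smoothed mean deviation

    def update(self, sample):
        err = sample - self.srtt
        self.srtt += err / 8                           # srtt tracks the mean
        self.rttvar += (abs(err) - self.rttvar) / 4    # rttvar tracks |error|
        return self.rto()

    def rto(self):
        # Timeout = mean plus a multiple of the deviation, so the timer adapts
        # to RTT variance instead of using a fixed beta * srtt.
        return self.srtt + 4 * self.rttvar

est = RttEstimator(0.10)
for s in (0.12, 0.30, 0.11):            # made-up RTT samples in seconds
    print(round(est.update(s), 3))
```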
This is a very good paper. 

Reading on"analysis of the increase and decrease algorithms for congestion avoidance in computer networks"

This paper shows that the additive-increase/multiplicative-decrease algorithm is the best way to do congestion avoidance, based on metrics such as fairness, distributedness, and so on. The authors confirm their conclusion through analysis of a simple model containing only two users. At the end of the paper, the authors give us a lot of future work.
I do think it is a very good paper. It uses simple math to derive a useful conclusion.
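The core argument is easy to replay numerically. Here is a toy two-user run with my own numbers: both users add a constant when the total load is within capacity and multiply by a factor smaller than one when it is not, and the allocations converge toward the fair and efficient point.

```python
# Toy replay (my own numbers) of the two-user AIMD argument:
# additive increase when total load <= capacity, multiplicative decrease otherwise.
CAPACITY, A, B = 10.0, 1.0, 0.5

x1, x2 = 8.0, 1.0                 # start far from fairness
for step in range(30):
    if x1 + x2 <= CAPACITY:
        x1, x2 = x1 + A, x2 + A   # additive increase keeps the difference fixed
    else:
        x1, x2 = B * x1, B * x2   # multiplicative decrease shrinks the difference
    fairness = (x1 + x2) ** 2 / (2 * (x1 * x1 + x2 * x2))   # Jain's fairness index
    print(f"step {step:2d}: x1={x1:5.2f} x2={x2:5.2f} fairness={fairness:.3f}")
```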

Thursday, September 4, 2008

Lixin Gao - On Inferring Autonomous System Relationships in the Internet

The paper first presents the three relationships between ASes: provider-customer, peer-to-peer, and sibling-to-sibling. Then it develops the concept of "valley-free" paths and the principles that AS paths should conform to. The author designs three algorithms to infer the relationships between ASes. The results of the algorithms are partly verified against AT&T internal data.
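The "valley-free" rule itself is compact enough to write down as code: an AS path should be an uphill run of customer-to-provider edges, at most one peer-to-peer edge at the top, and then a downhill run of provider-to-customer edges. Here is a sketch of my understanding; treating sibling edges as transparent is my simplification, and the little relationship table in the example is made up.

```python
# Sketch of the valley-free rule as I understand it.
def valley_free(path, rel):
    """path is a list of ASes; rel(a, b) gives the type of the edge a->b:
    'c2p' (a is a customer of b), 'p2c', 'peer', or 'sib'."""
    phase = "uphill"                       # uphill -> at most one peer -> downhill
    for a, b in zip(path, path[1:]):
        r = rel(a, b)
        if r == "sib":
            continue                       # siblings treated as transparent (my simplification)
        if phase == "uphill" and r == "c2p":
            continue                       # still climbing toward providers
        if phase == "uphill" and r == "peer":
            phase = "downhill"             # the single peer edge at the top
            continue
        if r == "p2c":
            phase = "downhill"             # from here on, only downhill edges
            continue
        return False                       # c2p or peer after the top = a valley
    return True

# Made-up relationships, just to exercise the check.
rels = {("A", "B"): "c2p", ("B", "C"): "peer", ("C", "D"): "p2c", ("D", "C"): "c2p"}
rel = lambda a, b: rels[(a, b)]
print(valley_free(["A", "B", "C", "D"], rel))   # True: up, peer, down
print(valley_free(["C", "D", "C"], rel))        # False: down then up is a valley
```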
Recently there have still been papers working on AS relationship inference. Is it true that we can't improve the accuracy using only BGP table entries, as in Gao's approach?
I had wanted to read this paper for a long time, and only today did I finally finish it. This is a very good paper: simple ideas, good writing.

Wednesday, September 3, 2008

Hari-Interdomain Internet Routing

The current Internet is composed of tens of thousands of autonomous systems (ASes), each owned and administered by a single commercial entity. This paper describes the de-facto inter-domain routing protocol, BGP. It also tells us that money is an issue in inter-domain routing. It makes the different relationships between ASes clear, and it explains why iBGP is used inside a domain and how it works.
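The money part can be boiled down to one export rule of thumb, which is roughly how I read the notes: routes learned from customers are advertised to everybody, while routes learned from peers or providers are advertised only to customers. Here is a sketch of that rule (my paraphrase, not a complete BGP policy):

```python
# My paraphrase of the customer/peer/provider export rule of thumb
# (not a complete BGP policy): you only carry traffic for routes that
# earn you money, i.e. routes that involve a customer.

def should_export(learned_from, advertise_to):
    """learned_from / advertise_to are 'customer', 'peer' or 'provider'."""
    if learned_from == "customer":
        return True                         # customer routes go to everyone
    return advertise_to == "customer"       # peer/provider routes only to customers

for lf in ("customer", "peer", "provider"):
    for to in ("customer", "peer", "provider"):
        print(f"learned from {lf:8s} -> advertise to {to:8s}: {should_export(lf, to)}")
```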
Why is BGP scalable? Is there any way to design a protocol to replace iBGP?

Monday, September 1, 2008

Reading on"The Design Philosophy of the DARPA Internet Protocols"

This paper discusses the design objectives and principles of the Internet in the 1970s and captures some of the early reasoning that shaped the Internet protocols. The author discusses in depth the reasons behind three goals: survivability, types of service, and variety of networks. Survivability was the first objective of the Internet design. The reason for splitting TCP and IP was the requirement of supporting a diversity of services.
But even now, in inter-domain routing, survivability is still challenging. A link failure or node failure will lead to a transient interruption of communication between two ASes. So I do think inter-domain multipath routing deserves more attention.
How many questions do we have about the original Internet? We can find the answers in this paper.
I also found something in common between this paper and the "end-to-end" paper regarding the voice application.

Reading on "End-to-end arguments in system design"

This paper argues that certain functions must be implemented at a higher layer, close to the application. The authors claim that implementing such a function at a low layer may be useless, and they discuss the end-to-end argument in the context of communication networks.
The first two sentences of the paper explain the meaning of the title: "This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level." The authors argue for the principle through several examples.
This paper must be the "ancestor" of host-to-host protocols like TCP.
Actually, this paper talks about nothing but a set of examples to which the end-to-end argument is applied. The authors' answer to all the threats in "careful file transfer" is to do the decisive check at the very end, because there is always something that has to be done at the end anyway.
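The "careful file transfer" point is easier for me to see in code: no matter how reliable the lower layers are, the check that covers every threat is an application-level checksum compared after the whole transfer, with a retry if it fails. A toy sketch of my own, obviously not from the paper:

```python
# Toy sketch (mine) of the end-to-end check in "careful file transfer":
# whatever the lower layers guarantee, the application still compares a
# checksum of what was stored at the far end and retries on mismatch.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def careful_transfer(data: bytes, send, max_retries=3) -> bool:
    """send(data) ships the bytes and returns what the receiver stored."""
    for _ in range(max_retries):
        stored = send(data)
        if checksum(stored) == checksum(data):
            return True        # the end-to-end check is the final arbiter
    return False

# A flaky "network + disk" that corrupts the first attempt.
attempts = []
def flaky_send(data):
    attempts.append(1)
    return data[:-1] + b"?" if len(attempts) == 1 else data

print(careful_transfer(b"some file contents", flaky_send))   # True on the retry
```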
The authors state that putting some functions at the lower layer can be efficient but may cost more, and at the same time say that implementing functions at the higher layer can be more efficient because the higher layer has more information about the system. A somewhat contradictory position.
Are we convinced by the authors to use the end-to-end argument? In the last part of the paper, we are told that the end-to-end argument comes from a bunch of historical projects.