INTERNETWORKING WITH TCP/IP VOLUME 2 PDF


Internetworking With TCP/IP Vol. 2, 2nd ed.: Design, Implementation, and Internals is available for download as a PDF or text file, or can be read online. The book works through the TCP/IP protocol stack in approximately the same order as Volume I, and includes a list of procedures, a glossary of internetworking terms and abbreviations, and a bibliography. Related volumes include the 3rd edition of Volume 2 (the ANSI C version) and Internetworking With TCP/IP Volume III: Client-Server Programming.




Note the distinction: all packets below nr have been received, no packets above ns have been received, and between nr and ns, some packets have been received.

When the receiver receives a packet, it updates its variables appropriately and transmits an acknowledgment with the new nr. The transmitter keeps track of the highest acknowledgment it has received, na. The transmitter knows that all packets up to, but not including, na have been received, but is uncertain about packets between na and nt; i.e., some of those packets may have been lost in transit.

Transmitter operation

Whenever the transmitter has data to send, it may transmit up to wt packets ahead of the latest acknowledgment, na.

In the absence of a communication error, the transmitter soon receives an acknowledgment for all the packets it has sent, leaving na equal to nt. If this does not happen after a reasonable delay, the transmitter must retransmit the packets between na and nt. Techniques for defining "reasonable delay" can be extremely elaborate, but they only affect efficiency; the basic reliability of the sliding window protocol does not depend on the details.

Receiver operation

When a packet arrives, the receiver checks its sequence number. If it falls within the window, the receiver accepts it. If it is numbered nr, the receive sequence number is increased by 1, and possibly more if further consecutive packets were previously received and stored. If the packet's number is not within the receive window, the receiver discards it and does not modify nr or ns. Whether the packet was accepted or not, the receiver transmits an acknowledgment containing the current nr.

The acknowledgment may also include information about additional packets received between nr and ns, but that only helps efficiency. Sequence numbers are computed modulo N, which is usually a power of 2. For example, the transmitter will only receive acknowledgments in the range na to nt, inclusive. A stronger constraint is imposed by the receiver. The operation of the protocol depends on the receiver being able to reliably distinguish new packets (which should be accepted and processed) from retransmissions of old packets (which should be discarded, and the last acknowledgment retransmitted).

This can be done given knowledge of the transmitter's window size. Thus, there are 2·wt different sequence numbers that the receiver can receive at any one time. However, the actual limit is lower.
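The window arithmetic is easy to get wrong because sequence numbers wrap around modulo N. As a rough illustration (not code from the book, and assuming N is a power of two), a receiver might test whether an arriving sequence number lies inside its window like this:

    #include <stdbool.h>

    #define N   8        /* size of the sequence number space (power of 2) */
    #define WR  4        /* receive window size, wr */

    /* Return true if sequence number seq lies in the receive window
     * [nr, nr + WR - 1], with all arithmetic taken modulo N.         */
    static bool in_receive_window(unsigned nr, unsigned seq)
    {
        unsigned offset = (seq - nr) & (N - 1);   /* distance from nr, mod N */
        return offset < WR;
    }

A packet numbered exactly nr advances the window; any other in-window packet can be stored for later; anything outside the window is discarded and the current nr is re-acknowledged.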


In either case, the receiver ignores the packet except to retransmit an acknowledgment.

Examples

The simplest sliding window: stop-and-wait

Although commonly distinguished from the sliding-window protocol, the stop-and-wait ARQ protocol is actually the simplest possible implementation of it.

The transmit window is 1 packet, and the receive window is 1 packet.

Ambiguity example

The transmitter alternately sends packets marked "odd" and "even". The acknowledgments likewise say "odd" and "even". Suppose that the transmitter, having sent an odd packet, did not wait for an odd acknowledgment, and instead immediately sent the following even packet. It might then receive an acknowledgment saying "expecting an odd packet next".

This would leave the transmitter in a quandary: has the receiver received both of the packets, or neither?

Go-back-N

In the go-back-N variant, the receiver refuses to accept any packet but the next one in sequence. If a packet is lost in transit, following packets are ignored until the missing packet is retransmitted, a minimum loss of one round-trip time.

For this reason, it is inefficient on links that suffer frequent packet loss.

Ambiguity example

Suppose that we are using a 3-bit sequence number, such as is typical for HDLC. This gives N = 8 sequence numbers; because the receive window is a single packet, the transmit window must be limited to 7. This is because, after transmitting 7 packets, there are 8 possible results: anywhere from 0 to 7 packets could have been received successfully.

Each time a client application executes, it contacts a server, sends a request, and awaits a response. When the response arrives, the client continues processing. Clients are often easier to build than servers, and usually require no special system privileges to operate. By comparison, a server is any program1 that waits for incoming communication requests from a client. The server receives a client's request, performs the necessary computation, and returns the result to the client.

Because a server executes with special system privilege, care must be taken to ensure that it does not inadvertently pass privileges on to the clients that use it.

For example, a file server that operates as a privileged program must contain code to check whether a given file can be accessed by a given client. The server cannot rely on the usual operating system checks because its privileged status overrides them.

Servers must contain code that handles issues such as authorization and protection. As we will see in later chapters, servers that perform intense computation or handle large volumes of data operate more efficiently if they handle requests concurrently. The combination of special privileges and concurrent operation usually makes servers more difficult to design and implement than clients. Later chapters provide many examples that illustrate the differences between clients and servers.

Nonstandard Client Software

Chapter 1 describes two broad classes of client application programs: those that access standard services and those that access nonstandard, customized services. The distinction between standard services and others is only important when communicating outside the local environment.

Within a given environment, system administrators usually arrange to define service names in such a way that users cannot distinguish between local and standard services.

Programmers who build network applications that will be used at other sites must understand the distinction, however, and must be careful to avoid depending on services that are only available locally. Customized, nonstandard applications range from simple to complex, and include such diverse services as image transmission and video teleconferencing.

In particular, some client software allows the user to specify both the remote machine on which a server operates and the protocol port number at which the server is listening. For example, Chapter 1 shows how standard application client software can use the TELNET protocol to access services other than the conventional TELNET remote terminal service, as long as the program allows the user to specify a destination protocol port as well as a remote machine.

1 Technically, a server is a program and not a piece of hardware. However, computer users frequently misapply the term to the computer responsible for running a particular server program. For example, they might say, "That computer is our file server," when they mean, "That computer runs our file server program."

Conceptually, software that allows a user to specify a protocol port number has more input parameters than other software, so we use the term fully parameterized client to describe it.

To specify only a remote machine, the user supplies the name of that machine when invoking the client. To specify both a remote machine and a port on that machine, the user supplies the machine name followed by the port number. Of course, when building client software, full parameterization is recommended. When designing client application software, include parameters that allow the user to fully specify the destination machine and destination protocol port number. Full parameterization is especially useful when testing a new client or server because it allows testing to proceed independently of the existing software already in use.

For example, a programmer can build a TELNET client and server pair, invoke them using nonstandard protocol ports, and proceed to test the software without disturbing standard services.

Connection-Oriented Servers

When programmers design client-server software, they must choose between two types of interaction: a connectionless style or a connection-oriented style. If the client and server communicate using UDP, the interaction is connectionless; if they use TCP, the interaction is connection-oriented.

From the application programmer's point of view, the distinction between connectionless and connection-oriented interactions is critical because it determines the level of reliability that the underlying system provides. TCP provides all the reliability needed to communicate across an internet. It verifies that data arrives, and automatically retransmits segments that do not arrive.

It computes a checksum over the data to guarantee that it is not corrupted during transmission. It uses sequence numbers to ensure that the data arrives in order, and automatically eliminates duplicate packets. Finally, TCP informs both the client and server if the underlying network becomes inoperable for any reason. By contrast, clients and servers that use UDP do not have any guarantees about reliable delivery. When a client sends a request, the request may be lost, duplicated, delayed, or delivered out of order.

Similarly, a response the server sends back to a client may be lost, duplicated, delayed, or delivered out of order. UDP can be deceiving because it provides best effort delivery. UDP does not introduce errors - it merely depends on the underlying IP internet to deliver packets.

IP, in turn, depends on the underlying hardware networks and intermediate gateways.


From a programmer's point of view, the consequence of using UDP is that it works well if the underlying internet works well. For example, UDP works well in a local environment because reliability errors seldom occur there. Errors usually arise only when communication spans a wide area internet. Programmers sometimes make the mistake of choosing connectionless transport (i.e., UDP) and then testing their application software only on a local area network. Because a local area network seldom or never delays packets, drops them, or delivers them out of order, the application software appears to work well.

However, if the same software is used across a wide area internet, it may fail or produce incorrect results. Beginners, as well as most experienced professionals, prefer to use the connection-oriented style of interaction. A connection-oriented protocol makes programming simpler, and relieves the programmer of the responsibility to detect and correct errors.

In fact, adding reliability to a connectionless internet message protocol like UDP is a nontrivial undertaking that usually requires considerable experience with protocol design.

Usually, application programs use UDP only when they have a specific reason to do so. We can summarize: When designing client-server applications, beginners are strongly advised to use TCP because it provides reliable, connection-oriented communication. Programs only use UDP if the application protocol handles reliability, the application requires hardware broadcast or multicast, or the application cannot tolerate virtual circuit overhead.

Stateful Servers

Information that a server maintains about the status of ongoing interactions with clients is called state information. Servers that do not keep any state information are called stateless servers; others are called stateful servers. The desire for efficiency motivates designers to keep state information in servers.

Keeping a small amount of information in a server can reduce the size of messages that the client and server exchange, and can allow the server to respond to requests quickly. Essentially, state information allows a server to remember what the client requested previously and to compute an incremental response as each new request arrives.

By contrast, the motivation for statelessness lies in protocol reliability: If the server uses incorrect state information when computing a response, it may respond incorrectly. Consider a file server that allows clients to remotely access information kept in the files on a local disk.

The server operates as an application program. It waits for a client to contact it over the network. The client sends one of two request types. It either sends a request to extract data from a specified file or a request to store data in a specified file. The server performs the requested operation and replies to the client.

Each message from a client that requests the server to extract data from a file must specify the complete file name the name could be quite lengthy , a position in the file from which the data should be extracted, and the number of bytes to extract. Similarly, each message that requests the server to store data in a file must specify the complete file name, a position in the file at which the data should be stored, and the data to store. On the other hand, if the file server maintains state information for its clients, it can eliminate the need to pass file names in each message.

The server maintains a table that holds state information about the files currently being accessed. When a client first opens a file, the server adds an entry to its state table that contains the name of the file, a handle (a small integer used to identify the file), and a current position in the file (initially zero). The server then sends the handle back to the client for use in subsequent requests. Whenever the client wants to extract additional data from the file, it sends a small message that includes the handle.

The server uses the handle to look up the file name and current file position in its state table. The server increments the file position in the state table, so the next request from the client will extract new data.
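A minimal sketch of the kind of state table being described might look like the following; the structure layout, names, and limits are illustrative only, not taken from the book's code:

    #include <stdio.h>
    #include <string.h>

    #define MAX_OPEN  64          /* maximum simultaneous open files */
    #define MAX_NAME  256

    struct file_state {
        int   in_use;             /* nonzero if this slot holds an open file  */
        char  name[MAX_NAME];     /* complete file name supplied by the client */
        long  position;           /* current position; the next read starts here */
    };

    static struct file_state table[MAX_OPEN];

    /* Open: record the name, set the position to zero, return a handle. */
    int server_open(const char *name)
    {
        for (int h = 0; h < MAX_OPEN; h++) {
            if (!table[h].in_use) {
                table[h].in_use = 1;
                strncpy(table[h].name, name, MAX_NAME - 1);
                table[h].name[MAX_NAME - 1] = '\0';
                table[h].position = 0;
                return h;          /* handle sent back to the client */
            }
        }
        return -1;                 /* table full */
    }

    /* Read: look up the handle, then advance the stored position. */
    long server_read(int h, char *buf, long len)
    {
        if (h < 0 || h >= MAX_OPEN || !table[h].in_use)
            return -1;
        FILE *fp = fopen(table[h].name, "rb");
        if (fp == NULL)
            return -1;
        fseek(fp, table[h].position, SEEK_SET);
        long n = (long)fread(buf, 1, (size_t)len, fp);
        fclose(fp);
        table[h].position += n;    /* the next request extracts new data */
        return n;
    }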

Thus, the client can send repeated requests to move through the entire file. When the client finishes using a file, it sends a message informing the server that the file will no longer be needed.

In response, the server removes the stored state information. As long as all messages travel reliably between the client and server, a stateful design makes the interaction more efficient.

The point is: In an ideal world, where networks deliver all messages reliably and computers never crash, having a server maintain a small amount of state information for each ongoing interaction can make messages smaller and processing simpler.

Although state information can improve efficiency, it can also be difficult or impossible to maintain correctly if the underlying network duplicates, delays, or delivers messages out of order (e.g., when the client and server communicate using UDP). Consider what happens to our file server example if the network duplicates a read request. Recall that the server maintains a notion of file position in its state information.

Assume that the server updates its notion of file position each time a client extracts data from a file. If the network duplicates a read request, the server will receive two copies. When the first copy arrives, the server extracts data from the file, updates the file position in its state information, and returns the result to the client.

When the second copy arrives, the server extracts additional data, updates the file position again, and returns the new data to the client. The client may view the second response as a duplicate and discard it, or it may report an error because it received two different responses to a single request. In either case, the state information at the server can become incorrect because it disagrees with the client's notion of the true state.

When computers reboot, state information can also become incorrect. If a client crashes after performing an operation that creates additional state information, the server may never receive messages that allow it to discard the information.

Eventually, the accumulated state information exhausts the server's memory. In our file server example, if a client opens files and then crashes, the server will maintain useless entries in its state table forever. A stateful server may also become confused or respond incorrectly if a new client begins operation after a reboot using the same protocol port numbers as the previous client that was operating when the system crashed.

Remember, however, that the underlying internet may duplicate and delay messages, so any solution to the problem of new clients reusing protocol ports after a reboot must also handle the case where a client starts normally, but its first message to a server becomes duplicated and one copy is delayed. In general, the problems of maintaining correct state can only be solved with complex protocols that accommodate the problems of unreliable delivery and computer system restart.

In a real internet, where machines crash and reboot, and messages can be lost, delayed, duplicated, or delivered out of order, stateful designs lead to complex application protocols that are difficult to design, understand, and program correctly. If the application protocol specifies that the meaning of a particular message depends in some way on previous messages, it may be impossible to provide a stateless interaction.

In essence, the issue of statelessness focuses on whether the application protocol assumes the responsibility for reliable delivery.

To avoid problems and make the interaction reliable, an application protocol designer must ensure that each message is completely unambiguous. That is, a message cannot depend on being delivered in order, nor can it depend on previous messages having been delivered. In essence, the protocol designer must build the interaction so the server gives the same response no matter when or how many times a request arrives. Mathematicians use the term idempotent to refer to a mathematical operation that always produces the same result.

We use the term to refer to protocols that arrange for a server to give the same response to a given message no matter how many times it arrives.
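For the file server example, one common way to make the read operation idempotent is to have each request carry everything the server needs, so the reply does not depend on any earlier message or on server-side position. The message layout below is only a sketch of that idea, not a format defined by the book:

    /* A self-contained, idempotent read request: it names the file and
     * gives an absolute position, so the server returns the same bytes
     * no matter how many times the request arrives or in what order.   */
    struct read_request {
        char          filename[256];   /* complete file name                */
        unsigned long offset;          /* absolute position within the file */
        unsigned long length;          /* number of bytes to return         */
    };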

In an internet where the underlying network can duplicate, delay or deliver messages out of order or where computers running client applications can crash unexpectedly, the server should be stateless.

The server can only be stateless if the application protocol is designed to make operations idempotent. A server program may need to access network services that require it to act as a client. For example, suppose our file server program needs to obtain the time of day so it can stamp files with the time of access.

Also suppose that the system on which it operates does not have a time-of-day clock. To obtain the time, the server acts as a client by sending a request to a time-of-day server.

Of course, designers must be careful to avoid circular dependencies among servers. Beginners and most experienced programmers use TCP to transport messages between the client and server because it provides the reliability needed in an internet environment.

Keeping state information in the server can improve efficiency. However, if clients crash unexpectedly or the underlying transport system allows duplication, delay, or packet loss, state information can consume resources or become incorrect. Thus, most application protocol designers try to minimize state information.

A stateless implementation may not be possible if the application protocol fails to make operations idempotent. Programs cannot be divided easily into client and server categories because many programs perform both functions.

A program that acts as a server for one service can act as a client to access other services. Other examples can be found by consulting applications that accompany various vendors' operating systems.

Why is full parameterization needed? What happens on your local system? What happens if two or more clients access the same file? What happens if a client crashes before closing a file? Use the operations open, read, write, and close to access files.

Arrange for open to return an integer used to access the file in read and write operations. How do you distinguish duplicate open requests from a client that sends an open, crashes, reboots, and sends an open again? What errors can result if messages are lost, duplicated, or delayed?

This chapter extends the notion of client-server interaction by discussing concurrency, a concept that provides much of the power behind client-server interactions but also makes the software difficult to design and build. The notion of concurrency also pervades later chapters, which explain in detail how servers provide concurrent access.

In addition to discussing the general concept of concurrency, this chapter also reviews the facilities that an operating system supplies to support concurrent process execution. It is important to understand the functions described in this chapter because they appear in many of the server implementations in later chapters. Concurrency can be achieved in more than one way. For example, a multi-user computer system can achieve concurrency by time-sharing, a design that arranges to switch a single processor among multiple computations quickly enough to give the appearance of simultaneous progress; or by multiprocessing, a design in which multiple processors perform multiple computations simultaneously.

Concurrent processing is fundamental to distributed computing and occurs in many forms. Among machines on a single network, many pairs of application programs can communicate concurrently, sharing the network that interconnects them.

For example, application A on one machine may communicate with application B on another machine, while application C on a third machine communicates with application D on a fourth. Although they all share a single network, the applications appear to proceed as if they operate independently.

The network hardware enforces access rules that allow each pair of communicating machines to exchange messages. The access rules prevent a given pair of applications from excluding others by consuming all the network bandwidth. Concurrency can also occur within a given computer system. For example, multiple users on a timesharing system can each invoke a client application that communicates with an application on another machine.

One user can transfer a file while another user conducts a remote login session. From a user's point of view, it appears that all client programs proceed simultaneously. Figure 3. Client software does not usually require any special attention or effort on the part of the programmer to make it usable concurrently.

The application programmer designs and constructs each client program without regard to concurrent execution; concurrency among multiple client programs occurs automatically because the operating system allows multiple users to each invoke a client concurrently.

Thus, the individual clients operate much like any conventional program. Most client software achieves concurrent operation because the underlying operating system allows users to execute client programs concurrently or because users on many machines each execute client software simultaneously.

An individual client program operates like any conventional program; it does not manage concurrency explicitly.

To understand why concurrency is important, consider server operations that require substantial computation or communication. For example, think of a remote login server.

If it operates with no concurrency, it can handle only one remote login at a time. Once a client contacts the server, the server must ignore or refuse subsequent requests until the first user finishes.

Clearly, such a design limits the utility of the server, and prevents multiple remote users from accessing a given machine at the same time. Chapters 9 through 13 each illustrate one of the algorithms, describing the design in more detail and showing code for a working server. The remainder of this chapter concentrates on terminology and basic concepts used throughout the text.

This section explains the basic concept of concurrent processing and shows how an operating system supplies it.

It gives examples that illustrate concurrency, and defines terminology used in later chapters. The operating system manages each executing computation as a process. The most essential information associated with a process is an instruction pointer that specifies the address at which the process is executing.

Other information associated with a process includes the identity of the user that owns it, the compiled program that it is executing, and the memory locations of the process' program text and data areas. A process differs from a program because the process concept includes only the active execution of a computation, not the code.

After the code has been loaded into a computer, the operating system allows one or more processes to execute it. In particular, a concurrent processing system allows multiple processes to execute the same piece of code "at the same time." Each process proceeds at its own rate, and each may begin or finish at an arbitrary time. Because each has a separate instruction pointer that specifies which instruction it will execute next, there is never any confusion.

The operating system makes the computer appear to perform more than one computation at a time by switching the CPU among all executing processes rapidly. From a human observer's point of view, many processes appear to proceed simultaneously. In fact, one process proceeds for a short time, then another process proceeds for a short time, and so on.

We use the term concurrent execution to capture the idea. It means "apparently simultaneous execution."


The important concept is: application programmers build programs for a concurrent environment without knowing whether the underlying hardware consists of a uniprocessor or a multiprocessor.

Processes

In a concurrent processing system, a conventional application program is merely a special case: one that executes as a single process. The notion of process differs from the conventional notion of program in other ways.

For example, most application programmers think of the set of variables defined in the program as being associated with the code. However, if more than one process executes the code concurrently, it is essential that each process has its own copy of the variables.

To understand why, consider the following segment of C code, which prints a short sequence of integers. In a conventional program, the programmer thinks of storage for variable i as being allocated with the code. However, if two or more processes execute the code segment concurrently, one of them may be on the sixth iteration when the other starts the first iteration.
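The segment itself does not survive in this excerpt; a minimal version of the kind of loop being described, assuming a count from 1 to 5, might be:

    #include <stdio.h>

    int main(void)
    {
        int i;                        /* each executing process needs its own copy of i */

        for (i = 1; i <= 5; i++)
            printf("%d\n", i);
        return 0;
    }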

Each must have a different value for i. Thus, each process must have its own copy of variable i or confusion will result. To summarize: when multiple processes execute a piece of code concurrently, each process has its own, independent copy of the variables associated with the code.

Subprograms accept arguments, compute a result, and then return just after the point of the call.

If multiple processes execute code concurrently, they can each be at a different point in the sequence of procedure calls. One process, A, can begin execution, call a procedure, and then call a second-level procedure before another process, B, begins. Process B may return from a first-level procedure call just as process A returns from a second-level call. The run-time system for procedure-oriented programming languages uses a stack mechanism to handle procedure calls.

The run-time system pushes a procedure activation record on the stack whenever it makes a procedure call. Among other things, the activation record stores information about the location in the code at which the procedure call occurs. When the procedure finishes execution, the run-time system pops the activation record from the top of the stack and returns to the procedure from which the call occurred.

Analogous to the rule for variables, concurrent programming systems provide separation between procedure calls in executing processes: each process has its own run-time stack of activation records. As with most computational concepts, the programming language syntax for creating a new process is trivial; it occupies only a few lines of code.

For example, consider a conventional C program that prints the integers from 1 to 5 along with their sum. In UNIX, a program creates an additional process by calling the system function fork. In essence, fork divides the running program into two almost identical processes, both executing at the same place in the same code. The two processes continue just as if two users had simultaneously started two copies of the application.

For example, the following modified version of the above example calls fork to create a new process. Note that although the introduction of concurrency changes the meaning of the program completely, the call to fork occupies only a single line of code.
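Neither listing survives in this excerpt. A sketch consistent with the behavior described below (each process printing the values 1 through 5 and then their sum, for twelve lines in all) might be:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int i, sum = 0;

        fork();                               /* from here on, two processes execute the code */
        for (i = 1; i <= 5; i++) {
            printf("The value of i is %d\n", i);
            sum += i;
        }
        printf("The sum is %d\n", sum);       /* each process has its own copies of i and sum */
        return 0;
    }

Removing the single call to fork yields the conventional, single-process version of the program.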

However, when the process reaches the call to fork, the system duplicates the process and allows both the original process and the newly created process to execute. Of course, each process has its own copy of the variables that the program uses. In fact, the easiest way to envision what happens is to imagine that the system makes a second copy of the entire running program.

Then imagine that both copies run, just as if two users had simultaneously executed the program. To understand the fork function, imagine that fork causes the operating system to make a copy of the executing program and allows both copies to run at the same time.

On one particular uniprocessor system, the execution of our example concurrent program produces twelve lines of output, beginning:

    The value of i is 1
    The value of i is 2
    The value of i is 3
    The value of i is 4
    The value of i is 5

Once the first process completed, the operating system switched the processor to the second process, which also ran to completion. The entire run took less than a second. Therefore, once a process gained control of the CPU, it quickly ran to completion.

2 To a programmer, the call to fork looks and acts like an ordinary function call in C. It is written fork(). At run-time, however, control passes to the operating system, which creates a new process.

If we examine concurrent processes that perform substantially more computation, an interesting phenomenon occurs: because the operating system shares the available CPU among the executing processes, their outputs become interleaved. We use the term timeslicing to describe systems that share the available CPU among several processes concurrently. For example, if a timeslicing system has only one CPU to allocate and a program divides into two processes, one of the processes will execute for a while, then the second will execute for a while, then the first will execute again, and so on.

If the timeslicing system has many processes, it runs each for a short time before it runs the first one again.

A timeslicing mechanism attempts to allocate the available processing equally among all available processes. Thus, all processes appear to proceed at an equal rate, no matter how many processes execute. With many processes executing, the rate is low; with few, the rate is high. To see the effect of timeslicing, we need an example program in which each process executes longer than the allotted timeslice. Extending the concurrent program above to iterate 10,000 times instead of 5 times produces quite different output: instead of all output from the first process followed by all output from the second process, output from both processes is mixed together.

In one run, the first process iterated 74 times before the second process executed at all. Then the second process iterated 63 times before the system switched back to the first process. On subsequent timeslices, the processes each received enough CPU service to iterate between 60 and 90 times. Of course, the two processes compete with all other processes executing on the computer, so the apparent rate of execution varies slightly depending on the mix of programs running.

Creating a truly identical copy of a running program is neither interesting nor useful because it means that both copies perform exactly the same computation. In practice, the process created by fork is not absolutely identical to the original process: Fork is a function that returns a value to its caller.

When the function call returns, the value returned to the original process differs from the value returned to the newly created process. In the newly created process, the fork returns zero; in the original process, fork returns a small positive integer that identifies the newly created process.

Technically, the value returned is called a process identifier or process id3. Concurrent programs use the value returned by fork to decide how to proceed. In the most common case, the code contains a conditional statement that tests to see whether the value returned is nonzero. Remember that each process has its own copy of all variables, and that fork will either return zero (in the newly created process) or nonzero (in the original process).
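A sketch of that conditional test (again, not the book's exact listing, and with error checking omitted) follows; each process prints an identifying message and exits:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t pid = fork();                   /* zero in the new process, nonzero in the original */

        if (pid != 0)
            printf("The original process continues (new process id %d).\n", (int)pid);
        else
            printf("The newly created process is running.\n");
        return 0;
    }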

Following the call to fork, the if statement checks variable pid to see whether the original or the newly created process is executing3. The two processes each print an identifying message and exit, so when the program runs, two messages appear. The value returned by fork differs in the original and newly created processes; concurrent programs use the difference to allow the new process to execute different code than the original process.

3 Many programmers abbreviate process id as pid.

The mechanism that UNIX uses is a system call, execve, that takes three arguments: the name of a file that contains an executable program, the argument vector to pass to the program, and the environment to give it. Execve replaces the code that the currently executing process runs with the code from the new program. The call does not affect any other processes. Thus, to create a new process that executes the object code from a file, a process must call fork and execve. For example, whenever the user types a command to one of the UNIX command interpreters, the command interpreter uses fork to create a new process for the command and execve to execute the code.
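In outline, a command interpreter's use of fork and execve looks like the sketch below; the program being launched (/bin/ls) and its arguments are chosen only for illustration:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char *argv[] = { "ls", "-l", NULL };
        char *envp[] = { NULL };

        pid_t pid = fork();
        if (pid == 0) {                        /* newly created process                 */
            execve("/bin/ls", argv, envp);     /* replace its code with the command     */
            perror("execve");                  /* reached only if execve fails          */
            _exit(1);
        } else if (pid > 0) {                  /* original process (the interpreter)    */
            int status;
            waitpid(pid, &status, 0);          /* wait for the command to finish        */
        } else {
            perror("fork");
        }
        return 0;
    }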

Execve is especially important for servers that handle diverse services. To keep the code for each service separate from the code for other services, a programmer can build, write, and compile each service as a separate program. When the server needs to handle a particular service, it can use fork and execve to create a process that runs one of the programs.

Later chapters discuss the idea in more detail, and show examples of how servers use execve. To make sure that all processes proceed concurrently, the operating system uses timeslicing, switching the CPU or CPUs among processes so fast that it appears to a human that the processes execute simultaneously. When the operating system temporarily stops executing one process and switches to another, a context switch has occurred.

Switching process context requires use of the CPU, and while the CPU is busy switching, none of the application processes receives any service. Thus, we view context switching as overhead needed to support concurrent processing. To avoid unnecessary overhead, protocol software should be designed to minimize context switching. In particular, programmers must always be careful to ensure that the benefits of introducing concurrency into a server outweigh the cost of switching context among the concurrent processes.

Later chapters discuss the use of concurrency in server software, present nonconcurrent designs as well as concurrent ones, and describe circumstances that justify the use of each.

In principle, select is easy to understand: it allows a program to wait until any one of a set of input sources becomes ready. As an example, imagine an application program that reads characters from a TCP connection and writes them to the display screen. The program might also allow the user to type commands on the keyboard to control how the data is displayed. Because a user seldom or never types commands, the program cannot wait for input from the keyboard - it must continue to read and display text from the TCP connection.

The user may type a command while the program is blocked waiting for input on the TCP connection. The problem is that the application cannot know whether input will arrive from the keyboard or the TCP connection first. To solve the dilemma, a UNIX program calls select. In doing so, it asks the operating system to let it know which source of input becomes available first.
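A sketch of that use of select, waiting for whichever of the keyboard (descriptor 0) or a TCP connection becomes readable first, might look like this; the variable sock is assumed to hold an already-connected socket descriptor:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    /* Wait until either standard input (the keyboard) or the TCP socket
     * 'sock' has data available, then report which source became ready. */
    void wait_for_input(int sock)
    {
        fd_set readfds;

        FD_ZERO(&readfds);
        FD_SET(0, &readfds);                  /* descriptor 0: the keyboard   */
        FD_SET(sock, &readfds);               /* the TCP connection           */

        if (select(sock + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }
        if (FD_ISSET(0, &readfds))
            printf("keyboard input is ready\n");
        if (FD_ISSET(sock, &readfds))
            printf("data has arrived on the TCP connection\n");
    }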

The call returns as soon as a source is ready, and the program reads from that source. For now, it is only important to understand the idea behind select; later chapters present the details and illustrate its use.

Concurrency in clients arises easily because multiple users can execute client application software at the same time.

Concurrency in servers is much more difficult to achieve because server software must be programmed explicitly to handle requests concurrently.

In UNIX, a program creates an additional process using the fork system call. We imagine that the call to fork causes the operating system to duplicate the program, causing two copies to execute instead of one.

Technically, fork is a function call because it returns a value. The only difference between the original process and a process created by fork lies in the value that the call returns. In the newly created process, the call returns zero; in the original process, it returns the small, positive integer process id of the newly created process.

Concurrent programs use the returned value to make new processes execute a different part of the program than the original process. A process can call execve at any time to have the process execute code from a separately-compiled program. Concurrency is not free.

When an operating system switches context from one process to another, the system uses the CPU. Programmers who introduce concurrency into server designs must be sure that the benefits of a concurrent design outweigh the additional overhead introduced by context switching.

Peterson and Silberschatz [ ] covers the general topic. Comer [ ] discusses the implementation of processes, message passing, and process coordination mechanisms; see also Leffler et al. [ ].

Approximately how many iterations of the output loop can a process make in a single timeslice?

Arrange for each process to print a few lines of output and then halt. What information does the newly created process share with the original process? Which version is easier to understand?

This chapter considers general properties of the interface an application program uses to communicate in the client-server model. The following chapter illustrates these properties by giving details of a specific interface.

From a programmer's point of view, the routines the operating system supplies define the interface between the application and the protocol software, the application interface. In other words, the TCP/IP standards do not dictate the details of the application interface; they leave those details to the designers of each operating system. On the positive side, this looseness provides flexibility and tolerance. More important, it means designers can use either a procedural or message-passing interface style (whichever style the operating system supports). On the negative side, a loose specification means that designers can make the interface details different for each operating system.

As vendors add new interfaces that differ from existing interfaces, application programming becomes more difficult and applications become less portable across machines. Thus, while system designers favor a loose specification, application programmers desire a restricted specification because it means applications can be compiled for new machines without change. The University of California at Berkeley defined an interface for the Berkeley UNIX operating system that has become known as the socket interface, or sockets.

A few other interfaces have been defined, but none has gained wide acceptance yet. An interface must support a set of conceptual operations such as allocating resources for communication, specifying local and remote endpoints, initiating a connection or waiting for one, transferring data, and terminating communication. Because most operating systems use a procedural mechanism to transfer control from an application program into the system, the standard defines the conceptual interface as a set of procedures and functions. The standard suggests the parameters that each procedure or function requires as well as the semantics of the operation it performs.

The point of defining conceptual operations is simple: because the standard does not prescribe exact details, operating system designers are free to choose alternative procedure names or parameters as long as they offer equivalent functionality.

To a programmer, system calls look and act like function calls. When an application invokes a system call, control passes from the application to the system call interface. The interface then transfers control to the operating system. The operating system directs the incoming call to an internal procedure that performs the requested operation. Once the internal procedure completes, control returns through the system call interface to the application, which then continues to execute.

As it passes through the system call interface, the process acquires privileges that allow it to read or modify data structures in the operating system. The operating system remains protected, however, because each system call branches to a procedure that the operating system designers have written. Implementations follow one of two approaches: In the first approach, the designer makes a list of all conceptual operations, invents names and parameters for each, and implements each as a system call.

Because many designers consider it unwise to create new system calls unless absolutely necessary, this approach is seldom used. The table in Figure 4. The call to open takes three arguments: For example, the code segment: For example, the statement: Finally, when an application finishes using a file, it calls close to deallocate the descriptor and release associated resources e.

When the Berkeley designers added TCP/IP to UNIX, they followed this second approach. First, they extended the set of file descriptors and made it possible for applications to create descriptors used for network communication. Second, they extended the read and write system calls so they worked with the new network descriptors as well as with conventional file descriptors.

Thus, when an application needs to send data across a TCP connection, it creates the appropriate descriptor, and then uses write to transfer data. However, not all network communication fits easily into UNIX's open-read-write-close paradigm. An application must specify the local and remote protocol ports and the remote IP address it will use, whether it will use TCP or UDP, and whether it will initiate transfer or wait for an incoming connection (i.e., whether it acts as a client or as a server).

If it is a server, it must specify how many incoming connection requests the operating system should enqueue before rejecting them. The next chapter shows the details of the design. The standards do discuss a conceptual interface, but it is intended only as an illustrative example. Although the standards present the conceptual interface as a set of procedures, designers are free to choose different procedures or to use an entirely different style of interaction (e.g., a message-passing style). Operating systems often supply services through a mechanism known as the system call interface.

How would you extend the application program interface to accommodate network communication? What are the major differences? How are the two similar?


What reasons could designers have for choosing one design over the other? How many system calls have already been assigned in your local operating system? How can a system designer add additional system calls without changing the hardware? Write an example script.

This chapter covers concepts in general, and gives the intended use of each call.

Later chapters show how clients and servers use these calls, and provide examples that illustrate many of the details. The socket interface originated with Berkeley UNIX; as part of that project, the designers created an interface that applications use to communicate, and TCP first appeared in a 4.x release of BSD UNIX. Because many computer vendors, especially workstation manufacturers like Sun Microsystems Incorporated, Tektronix Incorporated, and Digital Equipment Corporation, adopted Berkeley UNIX, the socket interface has become available on many machines.

Consequently, the socket interface has become so widely accepted that it ranks as a de facto standard. When designers create such an interface, they decide the scope of services that the functions supply and the style in which applications use them. Thus, the designers must choose one of two broad approaches: make the functions specific to a single protocol suite, or make them general enough to handle many protocol suites and use parameters to select among protocols. Differences between the two approaches are easiest to understand by their impact on the names of system functions and the parameters that the functions require.

For example, in the first approach, a designer might choose to have a system function named maketcpconnection, while in the second, a designer might choose to create a general function makeconnection and use a parameter to specify the TCP protocol.

Because the designers at Berkeley wanted to accommodate multiple sets of communication protocols, they used the second approach. They also decided to have applications specify operations using the type of service required instead of specifying the protocol name.

Thus, instead of specifying that it wants a TCP connection, an application requests the stream transfer type of service using the Internet family of protocols.

The calls allow the programmer to specify the type of service required rather than the name of a specific protocol. The overall design of sockets and the generality they provide have been debated since their inception.

Some computer scientists argue that generality is unnecessary and merely makes application programs difficult to read. Others argue that having programmers specify the type of service instead of the specific protocol makes it easier to program because it frees the programmer from understanding the details of each protocol family.

The system maintains a separate file descriptor table for each process. When a process opens a file, the system places a pointer to the internal data structures for that file in the process' file descriptor table and returns the table index to the caller.

The application program only needs to remember the descriptor and to use it in subsequent calls that request operations on the file. The operating system uses the descriptor as an index into the process' descriptor table, and follows the pointer to the data structures that hold all information about the file. The socket interface adds a new abstraction for network communication, the socket. Like files, each active socket is identified by a small integer called its socket descriptor.

UNIX allocates socket descriptors in the same descriptor table as file descriptors. Thus, an application cannot have both a file descriptor and a socket descriptor with the same value.

BSD UNIX contains a separate system function, socket, that applications call to create a socket; an application only uses open to create file descriptors. The general idea underlying sockets is that a single system call is sufficient to create any socket. Once the socket has been created, an application must make additional system calls to specify the details of its exact use. The paradigm will become clear after we examine the data structures the system maintains.

When an application calls socket, the operating system allocates a new data structure to hold the information needed for communication, and fills in a new descriptor table entry to contain a pointer to the data structure. Although the internal data structure for a socket contains many fields, the system leaves most of them unfilled when it creates the socket. As we will see, the application that created the socket must make additional system calls to fill in information in the socket data structure before the socket can be used.

A socket used by a server to wait for an incoming connection is called a passive socket, while a socket used by a client to initiate a connection is called an active socket. The only difference between active and passive sockets lies in how applications use them; the sockets are created the same way initially. In particular, the socket does not contain information about the protocol port numbers or IP addresses of either the local machine or the remote machine.

Before an application uses a socket, it must specify one or both of these addresses.

Other protocol families define their endpoint addresses in other ways. Because the socket abstraction accommodates multiple families of protocols, it does not specify how to define endpoint addresses nor does it define a particular protocol address format. Instead, it allows each protocol family to specify endpoints however it likes. To allow protocol families the freedom to choose representations for their addresses the socket abstraction defines an address family for each type of address.

A protocol family can use one or more address families to define address representations. The chief problem is that both symbolic constants, the one that names the TCP/IP protocol family (PF_INET) and the one that names its address family (AF_INET), have the same numeric value (2), so programs that inadvertently use one in place of the other operate correctly. Programmers should observe the distinction, however, because it helps clarify the meaning of variables and makes programs more portable.

For example, it may be necessary to write a procedure that accepts an arbitrary protocol endpoint specification as an argument and chooses one of several possible actions depending on the address type.

To accommodate such programs, the socket system defines a generalized format that all endpoint addresses use.

The generalized format consists of a pair: an address family identifier and an endpoint address encoded as that family requires. In practice, the socket software provides declarations of predefined C structures for address endpoints. Application programs use the predefined structures when they need to declare variables that store endpoint addresses or when they need to use an overlay to locate fields in a structure. The most general structure is known as a sockaddr structure. It contains a 2-byte address family identifier and a byte array to hold an address. Each protocol family that uses sockets defines the exact representation of its endpoint addresses, and the socket software provides corresponding structure declarations.
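The declarations themselves do not appear in this excerpt. In their classical Berkeley form they are essentially the following; real programs include <sys/socket.h> and <netinet/in.h> rather than declaring the structures themselves, and the exact field types vary slightly across systems:

    /* Generic endpoint address, used as an overlay for the family-specific forms. */
    struct sockaddr {
        unsigned short sa_family;      /* address family identifier (2 bytes)  */
        char           sa_data[14];    /* bytes holding the actual address     */
    };

    struct in_addr {
        unsigned long s_addr;          /* 32-bit IP address (network byte order) */
    };

    /* Internet (TCP/IP) family endpoint address. */
    struct sockaddr_in {
        unsigned short sin_family;     /* address family: AF_INET               */
        unsigned short sin_port;       /* protocol port number (network order)  */
        struct in_addr sin_addr;       /* IP address (network order)            */
        char           sin_zero[8];    /* padding to the size of sockaddr       */
    };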

This section describes the calls that provide the primary functionality that clients and servers need. The details of socket system calls, their parameters, and their semantics can seem overwhelming.

Much of the complexity arises because sockets have parameters that allow programs to use them in many ways. A socket can be used by a client or by a server, for stream transfer (TCP) or datagram (UDP) communication, with a specific remote endpoint address (usually needed by a client) or with an unspecified remote endpoint address (usually needed by a server).

To help understand sockets, we will begin by examining the primary socket calls and describing how a straightforward client and server use them to communicate with TCP.

Later chapters each discuss one way to use sockets, and illustrate many of the details and subtleties not covered here. The call returns a descriptor for the newly created socket.

Arguments to the call specify the protocol family that the application will use e. For a socket that uses the Internet protocol family, the protocol or type of service argument determines whether the socket will use TCP or UDP. An argument to connect allows the client to specify the remote endpoint, which includes the remote machine's IP address and protocol port number. Once a connection has been made, a client can transfer data across it.

Clients usually use write to send requests, while servers use it to send replies. A call to write requires three arguments.

The application passes the descriptor of a socket to which the data should be sent, the address of the data to be sent, and the length of the data. Usually, write copies outgoing data into buffers in the operating system kernel, and allows the application to continue execution while it transmits the data across the network.

4 Structure sockaddr is used to cast (i.e., overlay) the endpoint address structures that individual protocol families define.

If the system buffers become full, the call to write may block temporarily until TCP can send data across the network and make space in the buffer for new data. Usually, after a connection has been established, the server uses read to receive a request that the client sends by calling write. After sending its request, the client uses read to receive a reply.

To read from a connection, an application calls read with three arguments. The first specifies the socket descriptor to use, the second specifies the address of a buffer, and the third specifies the length of the buffer.

Read extracts data bytes that have arrived at that socket, and copies them to the user's buffer area. If no data has arrived, the call to read blocks until it does. If more data has arrived than fits into the buffer, read only extracts enough to fill the buffer. If less data has arrived than fits into the buffer, read extracts all the data and returns the number of bytes it found.

Clients and servers can also use read to receive messages from sockets that use UDP. As with the connection-oriented case, the caller supplies three arguments that identify a socket descriptor, the address of a buffer into which the data should be placed, and the size of the buffer.

Each call to read extracts one incoming UDP message i. If the buffer cannot hold the entire message, read fills the buffer and discards the remainder. If only one process is using the socket, close immediately terminates the connection and deallocates the socket. If several processes share a socket, close decrements a reference count and deallocates the socket when the reference count reaches zero.


An application calls bind to specify the local endpoint address for a socket. The call takes arguments that specify a socket descriptor and an endpoint address. Primarily, servers use bind to specify the well-known port at which they will await connections. Connection-oriented servers call listen to place a socket in passive mode and make it ready to accept incoming connections.

Most servers consist of an infinite loop that accepts the next incoming connection, handles it, and then returns to accept the next connection. Even if handling a given connection takes only a few milliseconds, it may happen that a new connection request arrives during the time the server is busy handling an existing request. To ensure that no connection request is lost, a server must pass listen an argument that tells the operating system to enqueue connection requests for a socket.

Thus, one argument to the listen call specifies a socket to be placed in passive mode, while the other specifies the size of the queue to be used for that socket. An argument to accept specifies the socket from which a connection should be accepted. Accept creates a new socket for each new connection request, and returns the descriptor of the new socket to its caller.

The server uses the new socket only for the new connection; it uses the original socket to accept additional connection requests.

After it finishes using the new socket, the server closes it. The representation, known as network byte order, represents integers with the most significant byte first. Although the protocol software hides most values used in headers from application programs, a programmer must be aware of the standard because some socket routines require arguments to be stored in network byte order. The socket routines include several functions that convert between network byte order and the local host's byte order.

Programs should always call the conversion routines even if the local machine's byte order is the same as the network byte order because doing so makes the source code portable to an arbitrary architecture. The conversion routines are divided into short and long sets to operate on 16-bit integers and 32-bit integers. Functions htons (host to network short) and ntohs (network to host short) convert a short integer from the host's native byte order to the network byte order, and vice versa.

Similarly, htonl and ntohl convert long integers from the host's native byte order to network byte order and vice versa. Doing so makes the source code portable to any machine, regardless of its native byte order.

The client creates a socket, calls connect to connect to the server, and then interacts using write to send requests and read to receive replies. When it finishes using the connection, it calls close.
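Putting the client-side calls together, a minimal connection-oriented client following that sequence might look like the sketch below; error handling is abbreviated, and the server address 192.0.2.1, the port 5000, and the request text are chosen only for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct sockaddr_in server;
        char reply[1024];

        int s = socket(AF_INET, SOCK_STREAM, 0);        /* Internet family, stream (TCP) service */

        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port   = htons(5000);                /* port in network byte order            */
        inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);

        if (connect(s, (struct sockaddr *)&server, sizeof(server)) < 0) {
            perror("connect");
            return 1;
        }
        write(s, "hello\n", 6);                         /* send a request                        */
        ssize_t n = read(s, reply, sizeof(reply));      /* wait for the reply                    */
        if (n > 0)
            fwrite(reply, 1, (size_t)n, stdout);
        close(s);                                       /* terminate the connection              */
        return 0;
    }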

A server uses bind to specify the local well-known protocol port it will use, calls listen to set the length of the connection queue, and then enters a loop. Inside the loop, the server calls accept to wait until the next connection request arrives, uses read and write to interact with the client, and finally uses close to terminate the connection.

The server then returns to the accept call, where it waits for the next connection.
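The corresponding server-side sequence can be sketched as follows; it binds a local port (again, 5000 is only an example), listens, and then loops accepting connections, handling each one (here, simply echoing the request) before closing it and accepting the next:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct sockaddr_in local;
        char request[1024];

        int s = socket(AF_INET, SOCK_STREAM, 0);

        memset(&local, 0, sizeof(local));
        local.sin_family      = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);      /* any local interface                  */
        local.sin_port        = htons(5000);            /* well-known port, network byte order  */

        if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) {
            perror("bind");
            return 1;
        }
        listen(s, 5);                                   /* enqueue up to 5 connection requests  */

        for (;;) {                                      /* iterative server loop                */
            int conn = accept(s, NULL, NULL);           /* new socket for each connection       */
            if (conn < 0)
                continue;
            ssize_t n = read(conn, request, sizeof(request));
            if (n > 0)
                write(conn, request, (size_t)n);        /* echo the request back as a reply     */
            close(conn);                                /* close the new socket only            */
        }
    }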


To use these calls and the structures they require, a program must incorporate the appropriate definitions into each source file with the C preprocessor include statement. Usually, include statements appear at the beginning of a source file; they must appear before any use of the constants they define.

In addition to its description of algorithms for client and server software, the text presents general techniques like tunneling, application-level gateways, and remote procedure calls.

If the user does not supply a second argument, telnet uses the well-known TELNET port. Chapter 5 explains the concept of network byte order, and describes library routines that convert from network byte order to the byte order used on the local machine.
