Introduction:- Client/server is one of the computer industry's newest and hottest buzzwords. There is no generic definition of client/server, as the term is used to describe a number of mature, developing, and anticipated technologies. However, the general idea is that clients and servers are separate logical entities that work together over a network to accomplish a task. Client-server is very fashionable. As such, it might be just a temporary fad; but there is general recognition that it is something fundamental and far-reaching; for example, the Gartner Group, who are leading industry analysts in this field, have predicted that
“By 1995 client-server will be a synonym for computing.”
Most of the initial client/server success stories involve small-scale applications that provide direct or indirect access to transactional data in legacy systems. The business need to provide data access to decision makers, the relative immaturity of client/server tools and technology, the evolving use of wide area networks and the lack of client/server expertise make these attractive yet low-risk pilot ventures. As organizations move up the learning curve from these small-scale projects towards mission-critical applications, there is a corresponding increase in performance expectations, uptime requirements and in the need to remain both flexible and scalable. In such a demanding scenario, the choice and implementation of an appropriate architecture becomes critical. In fact, one of the fundamental questions that practitioners have to contend with at the start of every client/server project is: “Which architecture is more suitable for this project – two tier or three tier?” Interestingly, 17% of all mission-critical client/server applications are three-tiered and the trend is growing, according to Standish Group International, Inc., a market research firm.
Architecture affects all aspects of software design and engineering. The architect considers the complexity of the application, the level of integration and interfacing required, the number of users, their geographical dispersion, the nature of networks and the overall transactional needs of the application before deciding on the type of architecture. An inappropriate architectural design or a flawed implementation could result in horrendous response times. The choice of architecture also affects the development time and the future flexibility and maintenance of the application. Current literature does not adequately address all these aspects of client/server architecture.
This paper defines the basic concepts of client/server architecture, describes the two tier and three tier architectures and analyzes their respective benefits and limitations. Differences in development efforts, flexibility and ease of reuse are also compared in order to aid further in the choice of appropriate architecture for any given project.
History & Definition:-
History: The University of Waterloo implemented Oracle Government Financials (OGF) in May of 1996. That moved UW’s core accounting systems to a vendor-supported package on a Solaris/Unix environment and away from locally developed package(s) on IBM/VM. Plans at that time were to move more (if not all) business systems to a single vendor and to standardize on a single database platform (Oracle for both). A very large, state-of-the-art Solaris system was purchased with the intention of co-locating these other Oracle-supplied services on the same system with the OGF. A network security architecture was planned that involved isolating administrative networks, firewalling those networks with protocol filters, and active traffic monitoring. Systems were purchased and deployed to implement that security architecture. Much has changed in the interim. While the OGF now includes more services beyond the 1996 suite, the plan to move all business systems has failed. Notably, we require PeopleSoft/HRMS (Human Resources Management System) for Payroll (deployed in the fourth quarter of 1998), with PeopleSoft/SIS (Student Information Services) to follow some years hence – Oracle was unable to deliver these key components for our business. We’ve also discovered that, while it’s reasonable to require Oracle as the database when other applications are specified, it’s unreasonable to expect that they will be certified with the same versions of the Oracle database and/or the underlying operating system. Technology changes quickly too: the state-of-the-art Solaris system is no longer current. Networks were restructured to isolate administrative systems in the “Red Room” and administrative users throughout the campus. However, the administrative firewall and active traffic monitor were never fully implemented, and what was deployed has recently been dismantled.
Definition: Despite the massive press coverage of client/server computing, there is much confusion around defining what client/server really is. Client and server are software and not hardware entities. In its most fundamental form, client/server involves a software entity (client) making a specific request, which is fulfilled by another software entity (server). Figure 1 illustrates the client/server exchange. The client process sends a request to the server. The server interprets the message and then attempts to fulfill the request. In order to fulfill the request, the server may have to refer to a knowledge source (database), process data (perform calculations), control a peripheral, or make an additional request of another server. In most architectures, a client can make requests of multiple servers and a server can service multiple clients.
Figure 1 – Client/Server Transactions
It is important to understand that the relationship between client and server is a command/control relationship. In any given exchange, the client initiates the request and the server responds accordingly. A server cannot initiate dialog with clients. Since the client and server are software entities, they can be located on any appropriate hardware. A client process, for instance, could be resident on network server hardware, and request data from a server process running on other server hardware or even on a PC. In another scenario, the client and server processes can be located on the same physical machine. In fact, in the prototyping stage, a developer may choose to have both the presentation client and the database server on the same PC hardware. The server can later be migrated (distributed) to a larger system for further pre-production testing after the bulk of the application logic and data structure development is complete. Although the client and server can be located on the same machine, this paper is concerned primarily with architectures used to create distributed applications, i.e. those where the client and server are on separate physical devices. According to Beaver et al., a distributed application consists of separate parts that execute on different nodes of the network and cooperate in order to achieve a common goal. The supporting infrastructure should also render the inherent complexity of distributed processing invisible to the end-user. The client in a client/server architecture does not have to sport a graphical user interface (GUI); however, the mass-commercialization of client/server has come about in large part due to the proliferation of GUI clients. Some client/server systems support highly specific functions such as print spooling (i.e. network print queues) or presentation services (i.e. X-Window).
While these special-purpose implementations are important, this paper is predominantly concerned with the distributed client/server architectures that demand flexibility in functionality.
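The request/response exchange of Figure 1 can be sketched with ordinary network sockets. The following minimal illustration is an assumption-laden toy, not any particular product's protocol: the "GET-PRICE" message format and the price table are invented for the example.

```python
import socket
import threading

PRICES = {"widget": 5, "gadget": 9}   # toy stand-in for a "knowledge source"

def serve_one(listener):
    conn, _ = listener.accept()                # wait for a client to connect
    request = conn.recv(1024).decode()         # receive the request message
    item = request.removeprefix("GET-PRICE ")  # interpret the message
    conn.sendall(str(PRICES.get(item, "unknown")).encode())  # fulfil it
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                # port 0: OS picks a free port
listener.listen(1)
threading.Thread(target=serve_one, args=(listener,), daemon=True).start()

# Client side: the client always initiates; the server only responds.
client = socket.socket()
client.connect(listener.getsockname())
client.sendall(b"GET-PRICE widget")
reply = client.recv(1024).decode()
client.close()
```

Note that both processes happen to run on one machine here, which mirrors the point above: client and server are software roles, not hardware.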
Meaning of client-server:-
Business meaning of client-server:- Client-server is generally perceived to be the next step forward in the operational effectiveness of business information systems. This is illustrated in figure 1, which indicates cumulative gains from a succession of innovations. Business computing started in the 1960s with batch processing. The main innovation in the 1970s was on-line transaction processing (OLTP), which brought information technology (IT) to the desktop, and made it an integral part of business processes. Batch processing and OLTP in combination continue to be at the core of most enterprises’ information systems. Then in the 1980s came personal computing, which made IT universally affordable and dispersed it throughout business enterprises. Now in the 1990s, client-server is generally perceived to be the way of integrating the separate parts of information systems back together. That is its role and its importance.
Figure 1 – Perceived business impact of client-server
In these circumstances client-server (or client/server) has become a popular brand name that is applied to almost every kind of product, and to all manner of business and technical insights and marketing messages. This tends to drain it of specific meaning; but in doing so, actually confirms its near-universal applicability.
Technical meaning of client-server:- A useful starting point for understanding client-server is the informal definition used by the Gartner Group:
“Client-server is the splitting of an application into tasks that are performed on separate computers, one of which is a programmable workstation (e.g. a PC).” This definition says that client-server is about distributed computing and software architecture (applications are split into tasks that may be on separate computers). It echoes the vital point that client-server is the way to integrate PCs into all kinds of information systems.
Three Generations of Messaging:
Host-based architecture (not a client/server architecture):
With mainframe software architectures, all intelligence is within the central host computer. Users interact with the host through a terminal that captures keystrokes and sends that information to the host. Mainframe software architectures are not tied to a hardware platform; user interaction can be done using PCs and UNIX workstations. A limitation of mainframe software architectures is that they do not easily support graphical user interfaces or access to multiple databases from geographically dispersed sites. In the last few years, mainframes have found a new use as servers in distributed client/server architectures.
LAN file-sharing architecture (not a client/server architecture):- The original PC networks were based on file-sharing architectures, where the server downloads files from the shared location to the desktop environment. The requested user job is then run (including logic and data) in the desktop environment. File-sharing architectures work if shared usage is low, update contention is low, and the volume of data to be transferred is low. In the 1990s, PC LAN (local area network) computing changed because the capacity of file sharing was strained as the number of online users grew (it can only satisfy about 12 users simultaneously) and graphical user interfaces (GUIs) became popular (making mainframe and terminal displays appear out of date). PCs are now being used in client/server architectures.
Internet Client Server Architecture:- The goal for this class is to build a base of background knowledge that will underlie the rest of the course. In many areas of technology, one gets the impression that the technology has always existed in its current form. But, of course, technology has a history just like any other natural or unnatural phenomenon. So it is for the Internet and the World Wide Web. During this discussion, we will look first at some of the important developments that have taken place over the past thirty years that have made the Internet what it is today. After reviewing this chronology, we will look at two of the underlying technologies that support the Internet. The first is Ethernet, the original local area network (LAN) technology and still one of the most prevalent communication systems used to connect computers that are within a few hundred yards of one another. The second is TCP/IP, the software standard that enables computers located around the world to direct messages to one another and to communicate reliably. After discussing the Internet, we will then turn our attention to the World Wide Web itself. The discussion begins with a review of its basic client/server architecture, in which a client program running on one computer communicates with a server program running on another to request some particular information or to have some service performed. The Web was built using a client/server architecture in which a Web browser (client) communicates with various Web servers to request pages of information or to have a program run through the server’s Common Gateway Interface (CGI). As the Internet/WWW becomes a more general computing and communications infrastructure, this strict client/server relationship is being expanded. One such expansion involves Java. More about these recent developments later in the course; but for now, we will concentrate on the Web’s classic client/server design.
The language Web clients and servers speak to one another is called HTTP (Hypertext Transfer Protocol). You will not have to learn HTTP in detail, but you will have to construct basic HTTP messages in order to do CGI programming and you should understand its underlying philosophy and its basic form and capabilities.
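An HTTP message is plain text: a start line, header lines, a blank line, then an optional body. The sketch below constructs a basic request and picks apart a canned response the way a browser or CGI script would; the host name and page content are placeholders.

```python
# A basic HTTP/1.0 request, built by hand (CRLF line endings are required).
request = (
    "GET /index.html HTTP/1.0\r\n"   # method, resource, protocol version
    "Host: www.example.com\r\n"      # a header line
    "\r\n"                           # blank line ends the header section
)

# A canned response of the kind a Web server sends back.
response = (
    "HTTP/1.0 200 OK\r\n"            # status line: version, code, reason
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Splitting the response into its three parts.
status_line, _, rest = response.partition("\r\n")
headers, _, body = rest.partition("\r\n\r\n")
```

The empty line separating headers from body is the structural detail that CGI programming depends on most.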
Client Process:- The client is a process (program) that sends a message to a server process (program), requesting that the server perform a task (service). Client programs usually manage the user-interface portion of the application, validate data entered by the user, dispatch requests to server programs, and sometimes execute business logic. The client-based process is the front-end of the application that the user sees and interacts with. The client process contains solution-specific logic and provides the interface between the user and the rest of the application system. The client process also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation CPU and peripherals. One of the key elements of a client workstation is the graphical user interface (GUI). Normally part of the operating system, the window manager detects user actions, manages the windows on the display and displays the data in the windows.
Server Process:- A server process (program) fulfills the client request by performing the task requested. Server programs generally receive requests from client programs, execute database retrievals and updates, manage data integrity and dispatch responses to client requests. Sometimes server programs execute common or complex business logic. The server-based process “may” run on another machine on the network. This server could be the host operating system or a network file server, which then provides both file system services and application services. Or, in some cases, another desktop machine provides the application services. The server process acts as a software engine that manages shared resources such as databases, printers, communication links, or high-powered processors. The server process performs the back-end tasks that are common to similar applications.
Typical client/server configurations include:
• Single client, single server
• Multiple clients, single server
Client/server is a computational architecture that involves client processes requesting service from server processes. Client/server computing is the logical extension of modular programming. Modular programming has as its fundamental assumption that separation of a large piece of software into its constituent parts (“modules”) creates the possibility for easier development and better maintainability. Client/server computing takes this a step further by recognizing that those modules need not all be executed within the same memory space. With this architecture, the calling module becomes the “client” (that which requests a service), and the called module becomes the “server” (that which provides the service). The logical extension of this is to have clients and servers running on the appropriate hardware and software platforms for their functions. For example, database management system servers running on platforms specially designed and configured to perform queries, or file servers running on platforms with special elements for managing files.
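The step from module call to client/server call can be sketched in a few lines. In this toy illustration a dictionary dispatch stands in for a real network transport, and the "tax" module, its 7% rate, and the service names are all invented for the example:

```python
# A module in the conventional sense: a separable unit of logic.
def tax_module(amount):
    return round(amount * 0.07, 2)   # assumed 7% rate, for illustration only

# Client/server takes modularity a step further: the called module is
# reached by name through a request, so it need not share the caller's
# memory space (the registry could live in another process entirely).
SERVICES = {"tax": tax_module}       # the "server" side's registry

def handle(request):                 # server role: decode and fulfil
    service, argument = request
    return SERVICES[service](argument)

# The calling module in the "client" role: it sends a named request
# instead of calling the function directly.
result = handle(("tax", 100.0))
```

Swapping the dictionary dispatch for a socket or RPC transport changes nothing about the calling module's logic, which is exactly the point of the architecture.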
Network Computing Architecture:-
Oracle’s Network Computing Architecture (NCA) can be captured by three concepts:
1.The World Wide Web is a truly ubiquitous service.
2.The Java Virtual Machine is (or will soon become) a truly ubiquitous service embedded within the Web-browser.
3.A three-tiered model for application delivery with an Oracle database engine (on a large Unix server), a lightweight Java application on the client, and a mid-tier “forms” server to provide the gateway between the two.
Oracle began shipping Release 10.7 NCA (the web-deployed applications) in January 1998.
With Release 10.7 NCA, Oracle responded to customer feedback on the difficulty of patching in Smart Client. Although the functionality is the same between 10 SC and 10 NCA, Oracle returned in the web-deployed release to a more granular patching strategy. This strategy also better preserves customizations. Since the forms technology runs on the server in the web-deployed release, relinking and regenerating after applying patches is now easier. Due to the differences in patching strategy, Oracle recommends customers not use Smart Client and Release 10.7 NCA in the same instance; Oracle will not support such a configuration. Customers with character-mode installations should migrate directly to the web-deployed release.
Client-Server Technology:- Client-server technology is best understood if we discuss it in four areas:
1.Personal platforms
2.Server platforms
3.Client-server middleware
4.Client-server tools and services
Each of these areas is distinctive, although there can be overlap between them.
The term platform is used here to refer to a complete combination of computer hardware and operating system software.
Personal platforms:- Personal platforms are perhaps the most distinctive area of client-server technology. We define a personal platform as:
A computer platform that is connected to a network, provides a consistent and intuitive user interface, and assists a personal user in accomplishing tasks on behalf of the enterprise. These characteristics are illustrated in figure 2. Personal platforms are relatively inexpensive and immensely powerful, and there is a wide choice of suppliers. Many different kinds of computers can be personal platforms (e.g. MS/DOS PC, Windows PC, OS/2 PC, UNIX workstation, Apple Macintosh, and various hand-held devices); but the most common case today is an IBM-compatible PC with the Microsoft Windows operating system.
Such platforms are now universally affordable wherever they are needed. This has turned the architecture of computer systems inside out: the old focus was scarce resources in the central machine, remote from its users; the new focus is the abundant personal resources now at the fingertips of each individual user. This trend has ever-increasing force, because PC price/performance ratios continue to improve by a factor of two every eighteen months or so. This change of focus aligns with changes in business structure: organizational hierarchies are being flattened, decision-making authority is being devolved, and IT-enabled processes can now provide services that were formerly provided by office staff. A combined effect of these business and technical trends is personal empowerment of the individual at the desk.
PCs provide personal productivity and independence, but this individuality, multiplied by huge numbers of PCs, can also create anarchy. Client-server helps to resolve these problems. Clients use shared resources (provided on server platforms), not just personal resources; the client-server structure enables all the software and hardware resources to be under architectural and management control. It transforms personal computing into inter-personal computing and enterprise-wide computing. These characteristics help to create order, workgroup cohesion, productivity, and flexibility of business process. Although personal platforms are the main economic and technical driving force for the move to client-server, they are only the first of the five technical ingredients identified at the start of section 2.
We define a server platform as:
A computer platform on which software provides IT services for use elsewhere in the system. Ultimately the services are for use at personal platforms; but services are also provided for use at other server platforms. A server platform may provide services via dependent terminals that do not qualify as personal platforms. Almost all kinds of computer platform can act as server platforms. Therefore, there are many different suppliers, and many possible kinds of server platforms, from supercomputers to PCs. Each is good for particular kinds of workloads, for different qualitative requirements, and in different areas of the price and performance spectrum. User enterprises can select different platforms to match different needs. This breadth of choice is illustrated in figure 3, which shows that the user at a personal platform may have access to services on many server platforms. This also illustrates the shift of focus onto the individual user at a personal platform, who may now choose IT services from many different sources elsewhere in the computer network. The polarization of systems into client and server platforms recognizes distinctions between personal and shared resources. Each personal platform is an independent personal resource, which may be mobile and is exposed to risks of accidental loss or damage. Conversely, a server platform provides a protected, fixed, and carefully managed environment for shared resources.
Figure 3 – Many server platforms to choose from
Even where the same technology is used for client and server platforms (e.g. PCs with the same kind of hardware and operating system), these distinctions between personal and shared resources should be made. In the limit, the same machine may be both a personal platform and a server platform (e.g. in a peer-to-peer network; see 3.2). As always, the server role brings obligations to guarantee availability and integrity of the shared resources.
We define client-server middleware as:
Packaged software to support the separate parts of client-server application software and enable them to work together. This is by far the most complex area of client-server technology. By concentrating the complexity here we are able to keep the other areas relatively simple. It includes many kinds of function, each of which may itself be distributed, and most of which are inter-related. Some of the main areas are:
• Networking services
• Distributed application services
• Distributed systems management
• Distributed security
• Distributed object management
• User interface management
• Print management
• Data management
• Transaction management
• Workflow management
Figure 4 is a symbolic representation of this middleware support for client-server application software. It emphasizes the importance of middleware in enabling client-server technology to operate across the whole business scope relevant to the user’s tasks. This may involve interaction across departmental and functional boundaries, and perhaps across enterprise boundaries.
Figure 4 Client-server middleware
Client-Server tools and services:- Client-server systems may be complex, but with well-integrated systems and well-designed user interfaces the technical complexity should not be visible to the user; it is essentially a problem for the application developer and service provider. They need software development tools and professional services to help manage and hide this complexity. Many of the tools and services needed are the same as always, but there are also needs specific to client-server systems. An important general point is that for packaged (“shrink-wrapped”) application software, the user enterprise does not need program construction tools. Packaged client-server application products are now becoming widely available (e.g. distributed office and groupware applications, business accounting applications, personnel and payroll applications). Another important trend is that different tools (and languages) are needed for different parts of modular application systems. The main distinctions are:
• User interface: languages and tools for construction of graphical user interfaces and any application logic intimately associated with them; e.g. GUI tools and Visual Basic.
• Database: languages and tools for the construction of databases, file systems and object stores, and construction of the application logic intimately associated with them; e.g. Data Manipulation Languages and Relational Database 4GLs.
• Business logic: languages and tools for the construction of application logic that is logically separate from user interfaces and databases; e.g. COBOL.
• Distributed processing: languages and tools specialized for distributed processing, and for spanning all the above functional areas (and other technological and organizational boundaries); e.g. Remote Procedure Call (RPC) tools.
• System management: methods and tools for electronic distribution of software, and operation and tuning of client-server systems.
Most of these tools are associated with the corresponding areas of middleware.
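As a concrete illustration of the distributed-processing category, Python's standard xmlrpc package lets a client invoke a remote procedure as though it were a local call, with the middleware handling the message format and transport. This is only a stand-in sketch for the RPC tools mentioned above; the "add" service is invented for the example.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure under a name that clients can call.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: the RPC middleware makes the remote call look local --
# proxy.add() is marshalled into an HTTP request behind the scenes.
host, port = server.server_address
proxy = ServerProxy(f"http://{host}:{port}")
total = proxy.add(2, 3)
```

The client code contains no socket or message-format logic at all, which is precisely what such middleware exists to hide.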
Client-Server architecture:- Looking back over the technology described in the previous section, we can discern three kinds of client-server architecture.
Basic client-server:- In basic client-server architecture, a personal or centralized application is split into two parts: a client part on a personal platform, and a server part on a server platform. The latter is often a shared resource, such as a filing service, a printing service, a database, or some application-specific function. The terms client and server are used to refer to the hardware platforms and the application software components (often somewhat ambiguously). Basic client-server architecture is illustrated in figure 9 (and has already been shown in more detail in figure 6).
Figure 9 – Basic client-server architecture
Basic client-server configurations are normally organized around a local area network (LAN). The whole assembly is usually described as a PC-LAN, and consists of many PCs for personal use (personal platforms), plus one or more shared PCs (server platforms). The local server platforms on these PC-LANs usually provide gateways into enterprise-wide and external networks, and to the servers on them. This is illustrated in figure 10.
Figure 10 – A typical PC-LAN
Although primarily expressed in terms of PCs and PC-LANs, these basic client-server concepts are applicable to all kinds of computers and networks (e.g. PCs, UNIX, mainframes, LANs and WANs).
Beyond the basics:- Beyond basic client-server there is peer-to-peer processing, co-operative processing and stand-alone processing. The term peer-to-peer processing is used to refer to configurations in which there are no server platforms, and the server parts of applications are located on personal platforms. Networks operating on this basis are referred to as peer-to-peer networks. This is a low-cost way of implementing small PC-LANs, etc.; but the lack of separate server platforms reduces system integrity and leads to system management difficulties. The term co-operative processing is used to refer to configurations in which application software is distributed over separate server platforms, and the client and server ends of interactions are both on server platforms. This includes interaction between separate applications, not just between parts of the same application. The term stand-alone processing is used to refer to configurations in which all parts of an application are on one platform (usually a personal platform). Any client-server relationships between the parts are not externally visible. People also use the terms peer-to-peer and co-operative processing interchangeably, and with various other meanings. This causes confusion and misunderstandings. There are also various other less well-known formulations such as server/requester and producer/consumer. All the main formulations are illustrated together in figure 11.
Figure 11 – Various formulations of client-server system structure
Unfortunately, many people sharply differentiate the other concepts from client-server (by which they really mean basic client-server). This obscures the vital point that all are variants within one unified structure: client-server architecture. It also leads to misleading statements to the effect that client-server (meaning basic client-server) is defunct, and is being superseded by other techniques such as co-operative processing.
General client-server architecture:- A fundamental limitation of basic client-server and of all the formulations in 3.1 and 3.2 is that they define software configuration in ways dependent on hardware configuration. Furthermore, it is often ambiguous whether the terms client and server refer to the software or the hardware. To escape from these limitations and ambiguities, the client-server relationship in software should be defined independently of software location, and independently of any classification of the underlying hardware as clients or servers. The essential clarification is that client and server are roles in which services are used and provided (respectively), and these roles occur in a relationship between autonomous building blocks. In such a relationship, one of the participants uses a service (it has the client role) and another provides the service (it has the server role). This is a client-server relationship. Large and flexible configurations can be built up by combination of these simple concepts. This is illustrated in figure 12.
Figure 12 – Principles of client-server architecture
As indicated on the right-hand side of the diagram, a building block may be both user and provider of services. Therefore, it may have client and server roles and may participate in many client-server relationships with other building blocks. It is client or server only in the context of the particular relationship considered. The realization of client-server architecture in software is via programming languages and middleware (not shown in figure 12). The physical realization of client-server architecture consists of networks of separate computers; consequently the term client-server tends to become a synonym for distributed processing. Client-server architecture is only incidentally about PCs, or use of any other particular kind of technology. However, in current circumstances, it is usually appropriate that client-server is viewed mainly in terms of exploiting PC technology (as in the Gartner definition which we started with in 1.2 above). This general form of client-server architecture (autonomous building-blocks, client-server relationships, client role, server role) is a fundamental ingredient of OPEN framework application architecture.
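The role-based view can be sketched directly: one building block below holds a server role in one relationship and a client role in another. All of the block names are invented for the illustration; in-process method calls stand in for whatever transport the middleware would provide.

```python
class Pricing:
    """A building block with only a server role."""
    def price(self, item):
        return {"widget": 5}.get(item, 0)

class Catalogue:
    """Server to Shop, but simultaneously a client of Pricing."""
    def __init__(self, pricing):
        self.pricing = pricing
    def describe(self, item):
        # Acting in the client role of its relationship with Pricing.
        return f"{item} costs {self.pricing.price(item)}"

class Shop:
    """A building block with only a client role."""
    def __init__(self, catalogue):
        self.catalogue = catalogue
    def page(self, item):
        return self.catalogue.describe(item)

line = Shop(Catalogue(Pricing())).page("widget")
```

Catalogue is neither "a client" nor "a server" in the absolute; it is one or the other only relative to the relationship being considered, which is the clarification the text argues for.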
1.One client is connected to at most one server at a time. [The customer later refuted this assumption.]
2.Replication is a secondary effect of the existing fat-client architecture; we assume that updates to one server are automatically propagated in a timely fashion.
3.A single client may have more than one session. [Replaced Assumption 1.]
4.All calculated columns (columns that represent behavior rather than aspects) are easily & quickly calculated on the server.
5.Deletion or insertion of a row forces a window update on the client.
6.Transmission of client-server traffic is out of scope.
Architecture Types:- When considering a move to client/server computing, whether it is to replace existing systems or introduce entirely new systems, practitioners must determine which type of architecture they intend to use. The vast majority of end-user applications consist of three components: presentation, processing, and data. Client/server architectures can be defined by how these components are split up among software entities and distributed on a network. There are a variety of ways of dividing these resources and implementing client/server architectures. This paper will focus on the most popular forms of implementation of two-tier and three-tier client/server computing systems.
Two-tier Architecture:- Although there are several ways to architect a two-tier client/server system, we will focus on examining what is overwhelmingly the most common implementation. In this implementation, the three components of an application (presentation, processing, and data) are divided between two software entities (tiers): client application code and database server (Figure 2). A robust client application development language and a versatile mechanism for transmitting client requests to the server are essential for a two-tier implementation. Presentation is handled exclusively by the client, processing is split between client and server, and data is stored on and accessed via the server. The PC client assumes the bulk of responsibility for application (functionality) logic with respect to the processing component, while the database engine – with its attendant integrity checks, query capabilities and central repository functions – handles data-intensive tasks. In a data access topology, a data engine would process requests sent from the clients. Currently, the language used in these requests is most typically a form of SQL. Sending SQL from client to server requires a tight linkage between the two layers.
To send the SQL the client must know the syntax of the server or have this translated via an API (Application Program Interface). It must also know the location of the server, how the data is organized, and how the data is named. The request may take advantage of logic stored and processed on the server, which would centralize global tasks such as validation, data integrity, and security. Data returned to the client can be manipulated at the client level for further sub selection, business modeling, “what if” analysis, reporting, etc.
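To make this tight client-database linkage concrete, here is a minimal sketch of a two-tier client composing SQL itself. `sqlite3` stands in for the remote database engine (in a real two-tier system the SQL would travel over the network via an API such as ODBC), and the table and column names are purely illustrative:

```python
import sqlite3

# sqlite3 stands in for the remote database engine; in a real two-tier
# system this SQL would be sent over the network to the server.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)
conn.execute("INSERT INTO orders (customer, amount) VALUES ('Acme', 120.0)")
conn.commit()

# The client must know the server's schema -- table name, column names,
# and SQL dialect. This is the "tight linkage" between the two tiers.
rows = conn.execute(
    "SELECT customer, amount FROM orders WHERE amount > ?", (100.0,)
).fetchall()
print(rows)  # results returned to the client for local manipulation
```

Note that any change to the table or column names on the server would break this client, which is precisely the maintenance problem discussed below.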
Figure 2 – Data Access Topology for two-tier architecture. Majority of functional logic exists at the client level.

The most compelling advantage of a two-tier environment is application development speed. In most cases a two-tier system can be developed in a small fraction of the time it would take to code a comparable but less flexible legacy system. Using any one of a growing number of PC-based tools, a single developer can model data and populate a database on a remote server, paint a user interface, create a client with application logic, and include data access routines. Most two-tier tools are also extremely robust. These environments support a variety of data structures, include a number of built-in procedures and functions, and insulate developers from many of the more mundane aspects of programming such as memory management. Finally, these tools also lend themselves well to iterative prototyping and rapid application development (RAD) techniques, which can be used to ensure that the requirements of the users are accurately and completely met. Tools for developing two-tier client/server systems have allowed many IS organizations to attack their applications backlog, satisfying pent-up user demand by rapidly developing and deploying what are primarily smaller workgroup-based solutions. Two-tier architectures work well in relatively homogeneous environments with fairly static business rules. This architecture is less suited for dispersed, heterogeneous environments with rapidly changing rules. As such, relatively few IS organizations are using two-tier client/server architectures to provide cross-departmental or cross-platform enterprise-wide solutions. Since the bulk of application logic exists on the PC client, the two-tier architecture faces a number of potential version control and application re-distribution problems.
A change in business rules would require a change to the client logic in each application in a corporation’s portfolio that is affected by the change. Modified clients would have to be re-distributed through the network – a potentially difficult task given the current lack of robust PC version control software and the problems associated with upgrading PCs that are turned off or not “docked” to the network. System security in the two-tier environment can be complicated, since a user may require a separate password for each SQL server accessed. The proliferation of end-user query tools can also compromise database server security. The overwhelming majority of client/server applications developed today are designed without sophisticated middleware technologies, which offer increased security. Instead, end-users are provided a password that gives them access to a database. In many cases this same password can be used to access the database with the data-access tools available in most commercial PC spreadsheet and database packages. Using such a tool, a user may be able to access otherwise hidden fields or tables and possibly corrupt data. Client tools and the SQL middleware used in two-tier environments are also highly proprietary, and the PC tools market is extremely volatile. The client/server tools market seems to be changing at an increasingly unstable rate. In 1994, the leading client/server tool developer was purchased by a large database firm, raising concern about the manufacturer’s ability to continue to work cooperatively with RDBMS vendors that compete with the parent company’s products. The number-two tool maker lost millions and has been labeled a takeover target. The tool that received some of the brightest accolades in early 1995 is supplied by a firm in the midst of severe financial difficulties and a management transition. This kind of volatility raises questions about the long-term viability of any proprietary tool an organization may commit to.
All of this complicates implementation of two-tier systems – migration from one proprietary technology to another would require a firm to scrap much of its investment in application code since none of this code is portable from one tool to the next.
Three tier:- Most sophisticated Web-based applications that involve data entry are based on a three-tier client/server architecture. The three tiers are
• The Client (Web Browser)
• The Web Server/Application Server
• The Database Server

The three-tier architecture (Figure 3) attempts to overcome some of the limitations of the two-tier scheme by separating presentation, processing, and data into separate, distinct software entities (tiers). The same types of tools can be used for presentation as were used in a two-tier environment; however, these tools are now dedicated to handling just the presentation. When the presentation client requires calculations or data access, a call is made to a middle-tier functionality server. This tier can perform calculations or can make requests as a client to additional servers. The middle-tier servers are typically coded in a highly portable, non-proprietary language such as C. Middle-tier functionality servers may be multi-threaded and can be accessed by multiple clients, even those from separate applications. Although three-tier systems can be implemented using a variety of technologies, the calling mechanism from client to server in such a system is most typically the remote procedure call (RPC). Since the bulk of two-tier implementations involve SQL messaging and most three-tier systems utilize RPCs, it is reasonable to examine the merits of these respective request/response mechanisms in a discussion of architectures. RPC calls from presentation client to middle-tier server provide greater overall system flexibility than the SQL calls made by clients in the two-tier architecture. This is because in an RPC, the requesting client simply passes the parameters needed for the request and specifies a data structure to accept returned values (if any). Unlike most two-tier implementations, the three-tier presentation client is not required to “speak” SQL. As such, the organization, names, or even the overall structure of the back-end data can be changed without requiring changes to PC-based presentation clients. Since SQL is no longer required, data can be organized hierarchically, relationally, or in object format.
This added flexibility can allow a firm to access legacy data and simplifies the introduction of new database technologies.
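The RPC style can be sketched in a few lines. The following Python example uses the standard-library `xmlrpc` modules as a stand-in for a middle-tier functionality server; the function name, the dictionary standing in for the data tier, and the use of a local port are all illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Middle-tier "functionality server": the client passes only parameters
# and receives a value back -- it never sees SQL or the data layout.
def order_total(customer):
    # A dict stands in for the back-end data tier; in a real system this
    # tier would query the database server on the client's behalf.
    data = {"Acme": [120.0, 35.5], "Globex": [18.0]}
    return sum(data.get(customer, []))

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(order_total)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Presentation client: a plain remote procedure call. The back-end data
# could be reorganized freely without changing this code.
port = server.server_address[1]
client = ServerProxy("http://localhost:%d" % port)
total = client.order_total("Acme")
print(total)  # 155.5
server.shutdown()
```

Because the client only names a function and its parameters, the middle tier can change how and where the data is stored without touching any presentation client, which is the flexibility argument made above.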
Figure 3 – Three-Tier Architecture. Functionality servers handle most of the logic processing. Middle-tier code can be accessed and utilized by multiple clients.

In addition to the openness stated above, several other advantages are presented by this architecture. Having separate software entities can allow for the parallel development of individual tiers by application specialists. It should be noted that the skill sets required to develop c/s applications differ significantly from those needed to develop mainframe-based character systems. As examples, user interface creation requires an appreciation for platform and corporate UI standards, and database design requires a commitment to and understanding of the enterprise’s data model. Having experts focus on each of these three layers can increase the overall quality of the final application. The three-tier architecture also provides for more flexible resource allocation. Middle-tier functionality servers are highly portable and can be dynamically allocated and shifted as the needs of the organization change. Network traffic can potentially be reduced by having functionality servers strip data down to the precise structure required before distributing it to individual clients at the LAN level. Multiple server requests and complex data access can emanate from the middle tier instead of the client, further decreasing traffic. Also, since PC clients are now dedicated to just presentation, memory and disk storage requirements for PCs will potentially be reduced. Modularly designed middle-tier code modules can be re-used by several applications. Reusable logic can reduce subsequent development efforts, minimize the maintenance workload, and decrease migration costs when switching client applications. In addition, implementation platforms for three-tier systems such as OSF/DCE offer a variety of additional features to support distributed application development.
These include integrated security, directory and naming services, server monitoring and boot capabilities for supporting dynamic fault-tolerance, and distributed time management for synchronizing systems across networks and separate time zones. There are, of course, drawbacks associated with a three-tier architecture. Current tools are relatively immature and require more complex 3GLs for middle-tier server generation. Many tools have under-developed facilities for maintaining server libraries – a potential obstacle to simplifying maintenance and promoting code re-use throughout an IS organization. More code in more places also increases the likelihood that a system failure will affect an application, so detailed planning with an emphasis on the reduction or elimination of critical paths is essential. Three tiers also bring with them an increased need for network traffic management, server load balancing, and fault tolerance. For technically strong IS organizations servicing customers with rapidly changing environments, three-tier architectures can provide significant long-term gains via increased responsiveness to business climate changes, code reuse, maintainability, and ease of migration to new server platforms and development environments.
Comparing two and three tier development efforts:- The graphs in Figures 4-6 illustrate the time to deployment for two-tier vs. three-tier environments. Time to deployment is forecast in overall systems delivery time, not man-hours. According to a Deloitte & Touche study, rapid application development time is cited as one of the primary reasons firms choose to migrate to client/server architecture. As such, strategic planning and platform decisions require an understanding of how development time relates to architecture and how development time changes as an IS organization gains experience in c/s.
Figure 4 – Initial Development Effort

Figure 4 shows the initial development effort forecast to create comparable distributed applications using the common two-tier and three-tier approaches discussed above. The three-tier application takes much longer to develop – this is due primarily to the complexity involved in coding the bulk of the application logic in a lower-level 3GL such as C and the difficulties associated with coordinating multiple independent software modules on disparate platforms. In contrast, the two-tier scheme allows the bulk of the application logic to be developed in a higher-level language within the same tool used to create the user interface.
Figure 5 – Subsequent Development Efforts

Subsequent development efforts may see three-tier applications deployed with greater speed than two-tier systems (Figure 5). This is entirely due to the amount of middle-tier code that can be re-used from previous applications. The speed advantage favoring the three-tier architecture will only result if the three-tier application is able to use a sizable portion of existing logic. Experience indicates that these savings can be significant, particularly in organizations that require separate but closely related applications for various business units. Re-use is also high for organizations with a strong enterprise data model, because data-access code can be written once and re-used whenever similar access needs arise across multiple applications. The degree of development time reduction on subsequent efforts will grow as an organization deploys more c/s applications and develops a significant library of re-usable middle-tier application logic.
Figure 6 – Client Tool Migration

Figure 6 makes the important case for code savings when migrating from one client development tool to another. It was stated earlier that client tools are highly proprietary and code is not portable between the major vendor packages. The point was also made that the PC tools market is highly volatile, with vendor shakeouts and technical “leapfrogging” commonplace. In a two-tier environment, IS organizations wishing to move from one PC-based client development platform to another will have to scrap their previous investment in application logic, since most of this logic is written in the language of the proprietary tool. In the three-tier environment this logic is written in a re-usable middle tier; thus, when migrating to the new tool, the developer simply has to create the presentation and add RPC calls to the functionality layer. Flexibility in re-using existing middle-tier code can also assist organizations developing applications for various PC client operating system platforms. Until recently there were very few cross-platform client tool development environments, and most of today’s cross-platform solutions are not considered “best-of-breed”. In a three-tier environment, separate client tools on separate platforms can access the middle-tier functionality layer. Coding application logic once in an accessible middle tier decreases the overall development time on the cross-platform solution and provides the organization greater flexibility in choosing the best tool on any given platform.
The characteristics of client/server architecture:-
The basic characteristics of client/server architectures are:
1) Combination of a client or front-end portion that interacts with the user, and a server or back-end portion that interacts with the shared resource. The client process contains solution-specific logic and provides the interface between the user and the rest of the application system. The server process acts as a software engine that manages shared resources such as databases, printers, modems, or high-powered processors.
2) The front-end task and back-end task have fundamentally different requirements for computing resources such as processor speeds, memory, disk speeds and capacities, and input/output devices.
3) The environment is typically heterogeneous and multivendor. The hardware platform and operating system of client and server are not usually the same. Client and server processes communicate through a well-defined set of standard application program interfaces (APIs) and RPCs.
4) An important characteristic of client-server systems is scalability. They can be scaled horizontally or vertically. Horizontal scaling means adding or removing client workstations with only a slight performance impact. Vertical scaling means migrating to a larger and faster server machine or multiservers.
We define a client-server application as:
An application system in which logically separate software components are integrated together via client-server relationships.
In a client-server relationship, one part of an application (the client end) uses a service provided by the other part (the server end). The latter is often a shared resource, used by many clients. Although integrated via the client-server relationship, the parts remain separate. We refer to them as being logically separate because they need not be physically remote from one another (they might be in the same computer). We describe client-server application software here in three steps: splitting an application, joining separate applications together, and distributed application structure.
Splitting an application:-
Figure 5 Application software modularity

There are many ways of partitioning application software into separate components. However, the content of most applications can usually be classified under three different technical headings: data management, application logic and presentation. This is illustrated in figure 5. If the application is to be split into two parts (one part on a client platform, the other on a server platform), the split can be made at either of the two boundaries between functions, or inside one of the three functions. Consequently there are five main ways of splitting a centralized or personal application into two parts between which there is a client-server relationship. This is the basis of the popular classification into five client-server styles, which is promoted by the Gartner Group. It is illustrated in figure 6.
Figure 6 Five generic styles of basic client-server structure
The details need not concern us here. The important point is that different styles suit different needs and circumstances:
• The two styles on the left of the diagram are typical of centralized interactive applications that have been adapted to client-server by means of graphical interface technology, terminal emulation, etc.
• The style in the middle of the diagram is typical of object-oriented distributed applications and distributed TP applications in which data and function are encapsulated together behind application interfaces
• The two styles on the right of the diagram are typical of data-centered applications using client-server 4GL development tools and relational database products.

Some applications combine all three areas of function (presentation, application logic and data management) at the personal platform. Also, different styles may occur in combination at the same platform.
Joining applications together:- One of the great strengths of client-server is the ability to join separate applications together. This can be done in many ways; but following the principles used in 2.4.1, there are essentially three levels at which applications can interface with one another. This is illustrated in figure 7.
Figure 7 Three levels at which applications can be joined together
The main characteristics and advantages and disadvantages of these three approaches are:
• At presentation level: Interaction at this level is achieved via Dynamic Data Exchange (DDE) within a window management system, or via scripting (see [Duxbury, 1994]), in which software uses an application’s user interface by simulating a human user. This kind of technique is often referred to as screen scraping. It is very useful for accessing legacy applications, but leads to software maintenance problems if the user interfaces need to change.
• At application function level: Interaction at this level is in terms of business functions. Therefore, the inter-application requests are about the business meanings of the application (and not its presentation or database encoding). This has the advantage of keeping their internal designs separate from their external interactions. There are fewer software maintenance problems.
• At data management level: Interaction at this level is by direct access to the other application’s database. This is common practice, but leads to software maintenance problems when application data structures change.
The first and third approaches inhibit potential for change; the second does not. Further distinctions can be made between direct and indirect interaction between applications, synchronous and asynchronous interaction, and externally programmed and internally programmed interaction.
Distributed application structure:- Distributed applications are evolving towards richly connected network structures of the kind illustrated in figure 8. The circles represent separate software components, and the lines represent client-server relationships between them. This is typical of the kind of structure that results from use of object-oriented design and distributed object management.
Figure 8 Complex distributed application

There is also large-scale structure of distributed application systems (within which the individual client-server relationships occur). Typically, three tiers of application software can be discerned in the large-scale structure:
• Front tier: Application software (and databases) at personal platforms, providing all kinds of application services, using local resources and remote resources. Typically, the platforms are PCs. This tier is where the greatest amount of computer power and of new application software is now being deployed.
• Middle tier: Application software (and databases) at server platforms, providing the back-end of personal applications, shared workgroup services and task-oriented services. Typically, the platforms are UNIX or PC. This tier provides rapid adaptation to business process change, without needing changes to the back tier. It puts boundaries around the turbulence and uncertainty generated in the volatile world at the first tier, where all the users are. It also provides lateral linkage across the enterprise (e.g. electronic mail services).
• Back tier: Application software and databases at server platforms providing corporate information services. These are usually functionally partitioned (e.g. accounts, manufacturing, personnel). Typically, the platforms are mainframes. This tier provides the core of shared and long-lived information assets that everything else depends on. There are strong guarantees of data integrity, and the applications and databases are stable, and their design changes rather slowly.
This structure separates different kinds of concerns, which used to be bundled together in centralized computing
Importance of client/server:-
Advantages of Client-Server:-
• Potential for reduced cost
• More GUI applications
• Gives people the opportunity to make changes for the better
• Better S/W development tools once established
• Exploits existing H/W and S/W configurations
• Matches distributed business models
• Flexibility and cost savings
• Flexible business modeling
• Maximum technology component choice
• Efficient use of computing resources
• Data interchangeability and interoperability
• Enhanced data sharing
• Sharing of resources among diverse platforms
• Location independence of data and processing
Disadvantages of Client-Server:-
• Heavy up-front cost
• Initial performance decline
• Lack of skilled professionals
• Need to rewrite a lot of software
• Need for retraining users
• Dependability – when the server goes down, operations cease
• Lack of mature tools
• Lack of scalability – network operating systems (e.g. Novell NetWare, Windows NT Server) are not very scalable
• Higher than anticipated costs
• Harder to build
• Susceptible to network load
• Shortage of specialists
• Difficult to debug
• Difficult to test
Client/Server Business Application Architectures:
Traditional application architectures have been based on function. Today, to meet the needs of the business, an application architecture should reflect the complete range of business requirements.
Therefore, client/server computing demands a three-layer view of the application:
1 The user interface layer, which implements the functional model
2 The business function layer, which implements the process model
3 The data layer, which implements the information model

It should be noted that this application architecture does not demand multiple hardware platforms, although such technology can be utilised, if the environment is robust and reliable enough and the business is prepared to pay the additional costs associated with workstation and LAN technology.
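The three-layer view can be sketched as ordinary code, even on a single platform, by keeping each layer behind its own interface. A minimal Python sketch, with all names and data purely illustrative:

```python
# Data layer: implements the information model.
_accounts = {"A-1001": 250.0}  # illustrative stand-in for a database

def get_balance(account_id):
    return _accounts[account_id]

# Business function layer: implements the process model; it calls only
# the data layer's interface, never its internal structures.
def can_withdraw(account_id, amount):
    return 0 < amount <= get_balance(account_id)

# User interface layer: implements the functional model; it talks only
# to the business layer, never to the data layer directly.
def handle_withdraw_request(account_id, amount):
    if can_withdraw(account_id, amount):
        return "approved"
    return "declined"

print(handle_withdraw_request("A-1001", 100.0))  # approved
print(handle_withdraw_request("A-1001", 500.0))  # declined
```

Because each layer depends only on the interface of the layer below, any layer can later be moved to its own platform without changing the others, which is the point made above about hardware being optional.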
Business Benefits:-
– There is a perceived need for vendor independence. This includes application development methodologies, programming paradigms, products and architectures.
– Organizations have changed from steep hierarchies to flattened hierarchies.
– Network management is replacing vertical management.
– There is a change to team-based management.
– The customer should have a single point of contact for all business with the organization.
– The customer should deal with the same person over multiple contacts.
– The user will perform as much processing as possible during customer contact time.
– The time required to complete the work will be minimized.
– There is a need for empowerment of staff and an audit trail of actions.
– Multi-skilled and multi-function teams need access to multiple applications.
Different types of servers:-
The simplest forms of servers are disk servers and file servers. With a file server, the client passes requests for files or file records over a network to the file server. This form of data service requires large bandwidth and can slow a network with many users down considerably. Traditional LAN computing allows users to share resources, such as data files and peripheral devices, by moving them from standalone PCs onto a networked file server. The more advanced forms of servers are database servers, transaction servers and application servers (Orfali and Harkey 1992). In database servers, clients pass SQL (Structured Query Language) requests as messages to the server and the results of the query are returned over the network. The code that processes the SQL request and the data reside on the server, allowing it to use its own processing power to find the requested data, rather than pass all the records back to the client and let it find its own data, as was the case with the file server. In transaction servers, clients invoke remote procedures that reside on servers, which also contain an SQL database engine. Procedural statements on the server execute a group of SQL statements (transactions), which either all succeed or all fail as a unit. Applications based on transaction servers are called On-line Transaction Processing (OLTP) applications; they tend to be mission-critical, requiring a 1-3 second response time 100% of the time, and demand tight controls over the security and integrity of the database. The communication overhead in this approach is kept to a minimum, as the exchange typically consists of a single request/reply (as opposed to multiple SQL statements in database servers). Application servers are not necessarily database-centered but are used to serve user needs, such as download capabilities from Dow Jones or regulating an electronic mail process. Basing resources on a server allows users to share data, while security and management services, which are also based in the server, ensure data integrity and security.
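The all-or-nothing behavior of a transaction can be sketched in a few lines; `sqlite3` stands in for the server-side SQL engine, and the account table and balances are purely illustrative:

```python
import sqlite3

# An integrity constraint (balance >= 0) enforced by the engine, as a
# transaction server would enforce it on behalf of all clients.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY,"
    " balance REAL CHECK (balance >= 0))"
)
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)", [("alice", 100.0), ("bob", 50.0)]
)
conn.commit()

def transfer(src, dst, amount):
    try:
        with conn:  # commits on success, rolls the whole unit back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst),
            )
        return True
    except sqlite3.IntegrityError:
        return False

print(transfer("alice", "bob", 30.0))   # True: both updates commit together
print(transfer("alice", "bob", 500.0))  # False: CHECK fails, both roll back
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
```

The failed transfer leaves both balances untouched: the two UPDATE statements succeed or fail as a unit, which is the defining property of the transaction server described above.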
Special types of Architecture:
IBM’s System Application Architecture: SAA is a collection of selected software interfaces, conventions, and protocols that are used as a framework for developing consistent, integrated applications across the major IBM computing environments. Four major components of this architecture are:
– Common User Access (CUA) defines conventions for GUI look and feel.
– Common Programming Interface (CPI) provides languages, tools, and APIs that give applications greater portability and more consistent user interfaces across multiple platforms.
– Common Communication Support (CCS) supports existing communications standards, such as LU 6.2.
– Common Applications, written by IBM, will serve as demonstrations of SAA concepts and make it easy for users to migrate between systems.
APPLE’s VITAL Architecture:- VITAL provides a way of building information systems constructed from generalize
By: Mehta Ankit Chandrakant