Client-Server Architecture
The Database Environment
Jan L. Harrington , in Relational Database Design and Implementation (Fourth Edition), 2016
Client/Server
Client/server architecture shares the data processing chores between a server—typically, a high-end workstation but quite possibly a mainframe—and clients, which are usually PCs. PCs have significant processing power and therefore are capable of taking raw data returned by the server and formatting the result for output. Application programs and query processors can be stored and executed on the PCs. Network traffic is reduced to data manipulation requests sent from the PC to the database server and raw data returned as a result of that request. The result is significantly less network traffic and theoretically better performance.
Today's client/server architectures exchange messages over LANs. Although a few older Token Ring LANs are still in use, most of today's LANs are based on Ethernet standards. As an example, take a look at the small network in Figure 1.3. The database runs on its own server (the database server), using additional disk space on the network attached storage device. Access to the database is controlled not only by the DBMS itself, but by the authentication server.
A client/server architecture is similar to the traditional centralized architecture in that the DBMS resides on a single computer. In fact, many of today's mainframes actually function as large, fast servers. The need to handle large data sets still exists although the location of some of the processing has changed.
Because a client/server architecture uses a centralized database server, it suffers from the same reliability problems as the traditional centralized architecture: if the server goes down, data access is cut off. However, because the "terminals" are PCs, any data downloaded to a PC can be processed without access to the server.
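To make the division of labor concrete, here is a minimal sketch of the client side of such an exchange in Python, assuming a hypothetical database server at db.example.local that accepts a query string and replies with comma-separated rows; a real DBMS would use its vendor's own wire protocol. Only the request and the raw result rows cross the network, and all formatting happens on the PC.

```python
import socket

# Minimal sketch of the client side of a client/server exchange.
# The host name, port, and wire format are assumptions for illustration;
# a real DBMS client would use its vendor's network protocol instead.
SERVER = ("db.example.local", 5432)
QUERY = "SELECT name, salary FROM employees WHERE dept = 'HR'"

def run_query(query):
    """Send one query to the database server and return the raw rows."""
    with socket.create_connection(SERVER, timeout=5) as conn:
        conn.sendall(query.encode("utf-8") + b"\n")   # the request going out
        raw = b""
        while chunk := conn.recv(4096):               # raw result coming back
            raw += chunk
    # Assume the server returns one comma-separated row per line.
    return [line.split(",") for line in raw.decode("utf-8").splitlines()]

if __name__ == "__main__":
    rows = run_query(QUERY)
    # Formatting for output happens locally on the PC, not on the server.
    for name, salary in rows:
        print(f"{name:<20} {salary:>10}")
```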
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128043998000016
Human Resource Information Systems
Michael Bedell , in Encyclopedia of Information Systems, 2003
1 Client–server Architecture
The client–server architecture is a distributed computing system in which tasks are split between software on the server computer and software on the client computer. The client computer initiates activity by requesting information or services from the server. This is comparable to a customer who orders materials from a supplier, who responds to the request by shipping the requested materials. One strength of this architecture is that computers distributed across a network can share resources, in this case a single database, among many users. Another strength is that additional hardware can easily be added to increase computing power. In the case of the HRIS, the HR professional uses a client computer to request information appropriate to his or her security clearance from the server. The HRIS server computer houses the database that contains the organization's data.
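A minimal sketch of the clearance check described above, with hypothetical clearance levels, field names, and a single made-up employee record; an actual HRIS would enforce this against its database and directory services rather than in-memory dictionaries.

```python
# Minimal sketch of the server-side clearance check described above.
# Clearance levels, field names, and the sample record are hypothetical.
EMPLOYEE_RECORD = {
    "name": "A. Example",
    "department": "Finance",
    "salary": 88_000,
    "performance_rating": "Exceeds",
}

# Which fields each clearance level may see (an assumption for illustration).
CLEARANCE_FIELDS = {
    "basic": {"name", "department"},
    "manager": {"name", "department", "performance_rating"},
    "hr_admin": {"name", "department", "performance_rating", "salary"},
}

def handle_request(record, clearance):
    """Return only the fields the requesting client is cleared to view."""
    allowed = CLEARANCE_FIELDS.get(clearance, set())
    return {field: value for field, value in record.items() if field in allowed}

print(handle_request(EMPLOYEE_RECORD, "manager"))
# {'name': 'A. Example', 'department': 'Finance', 'performance_rating': 'Exceeds'}
```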
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B0122272404000861
Choosing the Right Software
Anthony C. Caputo , in Digital Video Surveillance and Security, 2010
Troubleshooting
The VMS uses a client/server architecture and follows many of the same rules as any other client/server application, with the possible exception of more granular software permission and privilege options.
As with any troubleshooting, as depicted in Figure 7-10, you must first confirm there's power throughout the topology. If there are two switches and a router between the workstation and the server, make sure everything is powered and running. A simple ping test from the workstation to the server can verify clear connectivity, and if that fails, then ping the router and/or firewall. If that fails, then check the network connection at the workstation and any switch in between.
A PATHPING can also provide a network path from the workstation to the server when there are any changes made to the topology, such as switching to another router and/or firewall, that haven't been properly configured for the VMS system.
Depending on the complexity of the permissions and privileges, a quick diagnosis of authentication problems may be to just log in as another known-good account. If that's successful, then there may either be a mistakenly deleted account ID (it's happened) or the permissions were changed, making it impossible for that specific ID to log in to the system. Following the network troubleshooting suggestions in Chapter 4 will also assist in uncovering the problem.
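The stepwise checks above can be scripted. The sketch below is a rough illustration in Python, with placeholder addresses for the VMS server, router/firewall, and switch; substitute the addresses from your own topology.

```python
import platform
import subprocess

# Sketch of the stepwise connectivity check described above.
# All addresses are placeholders for your own topology.
HOPS = {
    "VMS server": "192.168.1.10",
    "router/firewall": "192.168.1.1",
    "local switch": "192.168.1.2",
}

def ping(host):
    """Return True if the host answers a single ping."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            capture_output=True)
    return result.returncode == 0

for label, address in HOPS.items():
    if ping(address):
        print(f"{label} ({address}) answers; any remaining fault lies beyond this point.")
        break
    print(f"{label} ({address}) does not answer; checking the next device back...")

# On Windows, PATHPING maps the full route when the topology has changed:
#   subprocess.run(["pathping", HOPS["VMS server"]])
```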
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B978185617747400007X
Choosing the Right Software
Anthony C. Caputo , in Digital Video Surveillance and Security (Second Edition), 2014
Troubleshooting
The VMS uses a client/server architecture and follows many of the same rules as any other client/server application, with the possible exception of more granular software permission and privilege options.
As with any troubleshooting activity, as depicted in Figure 8.16, you must first confirm that there is connectivity between the client software and the server itself. The easiest method for determining successful connectivity is if the VMS Server software prompts for a user ID and password. This means that the server software "heard" the request from the client application. If there is no login prompt, confirm that there is power throughout the topology. If there are two switches and a router between the workstation and the server, make sure everything is powered and running. A simple ping test from the workstation to the server can verify clear connectivity, and if that fails, then ping the router and/or firewall. If that fails, then check the network connection at the workstation and any switch in between.
A pathping can also provide a network path from the workstation to the server in the event that any changes have been made to the topology, such as switching to another router or firewall, that might not have been properly configured for the VMS.
Depending on the complexity of the permissions and privileges, a quick diagnosis of authentication problems may be to simply log in as another known good account. If that's successful, then there may either be a mistakenly deleted account ID (it has happened) or the permissions were changed, making it impossible for that specific ID to log into the system. Following the network troubleshooting suggestions in Chapter 4 will also assist you in uncovering the problem.
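Because a login prompt simply means the server process answered on its service port, a quick TCP connection test approximates the same check. The sketch below is illustrative only; the host address and port number are placeholders for whatever your VMS server is actually configured to listen on.

```python
import socket

# Quick check that the VMS server process is actually listening, which is
# roughly what a successful login prompt tells you. Host and port are
# placeholders; use the address and port your VMS server is configured for.
VMS_HOST = "192.168.1.10"
VMS_PORT = 8080   # hypothetical service port

def server_is_listening(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if server_is_listening(VMS_HOST, VMS_PORT):
    print("Server accepts connections; look at accounts, permissions, or the client install.")
else:
    print("No answer on the service port; fall back to ping tests along the topology.")
```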
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124200425000083
Literature Review
Oluwatobi Ayodeji Akanbi , ... Elahe Fazeldehkordi , in A Machine-Learning Approach to Phishing Detection and Defense, 2015
2.5.2 Lookup System
Lookup systems implement a client–server architecture in which the server side maintains a blacklist of known fake URLs (Li and Helenius, 2007; Zhang et al., 2006) and the client-side tool checks visited sites against the blacklist and warns the user if a website poses a threat. Lookup systems utilize collective sanctioning mechanisms akin to those in reputation ranking mechanisms (Hariharan et al., 2007). Online communities of practice and system users provide information for the blacklists. Online communities such as the Anti-Phishing Working Group and the Artists Against 4-1-9 have developed databases of known concocted and spoof websites. Lookup systems also examine URLs directly reported or assessed by system users (Abbasi and Chen, 2009b).
Several lookup systems exist; perhaps the most common is Microsoft's IE Phishing Filter, which uses a client-side whitelist combined with a server-side blacklist gathered from online databases and IE user reports. Similarly, Mozilla Firefox's FirePhish toolbar and the EarthLink toolbar also maintain blacklists of spoof URLs. Firetrust's Sitehound system stores spoof and concocted site URLs taken from online sources such as the Artists Against 4-1-9. A benefit of lookup systems is that they characteristically have a high degree of accuracy, as they are less likely to flag an authentic site as phony (Zhang et al., 2006). They are also simpler to work with and computationally faster than most classifier systems; comparing URLs against a list of identified phonies is rather simple. In spite of this, lookup systems are still vulnerable to higher levels of false negatives, failing to identify fake websites. One limitation of blacklists can be attributed to the small number of available online resources and their limited coverage. For example, the IE Phishing Filter and FirePhish tools only amass URLs for spoof sites, making them ineffective against concocted sites (Abbasi and Chen, 2009b). The performance of lookup systems may also vary with the time of day and the interval between report time and evaluation time (Zhang et al., 2006). Moreover, blacklists tend to contain older fake websites rather than newer ones, which gives impostors a better chance of a successful attack before being blacklisted. Furthermore, Liu et al. (2006) claimed that 5% of spoof site recipients become victims in spite of the profusion of browser-integrated lookup systems.
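The core lookup logic is straightforward set membership, which is why these systems are fast and rarely flag authentic sites, yet miss brand-new fakes. The sketch below is a toy illustration of a whitelist-then-blacklist check with made-up URLs; it is not the implementation used by IE Phishing Filter, FirePhish, or Sitehound.

```python
# Minimal sketch of the whitelist-then-blacklist check described above.
# The lists are tiny in-memory stand-ins for the server-side databases
# and the client-side whitelist; all URLs are made up.
WHITELIST = {"https://www.example-bank.com"}          # known-good sites
BLACKLIST = {"http://examp1e-bank.secure-login.biz"}  # reported spoof URLs

def classify(url):
    """Return 'safe', 'phishing', or 'unknown' for a visited URL."""
    if url in WHITELIST:
        return "safe"          # skip the server lookup entirely
    if url in BLACKLIST:
        return "phishing"      # warn the user before the page loads
    return "unknown"           # a new site: this is where false negatives arise

for url in ("https://www.example-bank.com",
            "http://examp1e-bank.secure-login.biz",
            "http://brand-new-scam.example"):
    print(f"{classify(url):>8}  {url}")
```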
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128029275000022
Server Classifications
Shu Zhang , Ming Wang , in Encyclopedia of Information Systems, 2003
I.B.2.b. Three-tier Client/server Architecture
The need for enterprise scalability challenged the traditional two-tier client/server architecture. In the mid-1990s, as applications became more complex and potentially could be deployed to hundreds or thousands of end users, the client side presented problems that prevented true scalability. Because two-tier client/server applications are not optimized for WAN connections, response time is often unacceptable for remote users. Application upgrades require software and often hardware upgrades on all client PCs, resulting in potential version control problems.
By 1995, a new three-tier client/server architecture had been proposed, with each tier running on a different platform:
1. Tier one is the user interface layer, which runs on the end user's computer.
2. Tier two is the business logic and data processing layer. This middle tier runs on a server and is often called the application server.
3. Tier three is the data storage system, which stores the data required by the middle tier. This tier may run on a separate server, called the database server or back-end server.
In a three-tier application, the user interface processes remain on the client's computer, but the business rules reside and execute on the middle (application) layer, between the client's computer and the computer that hosts the data storage/retrieval system. One application server is designed to serve multiple clients. In this type of application, the client never accesses the data storage system directly.
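A compressed, single-process sketch of the three tiers follows; plain Python functions and an in-memory SQLite database stand in for the application server and database server, and the table and business rule are invented for illustration. The point is the flow: the user interface calls the business logic, which alone talks to the data store.

```python
import sqlite3

# A compressed, single-process sketch of the three tiers. In a real
# deployment each tier runs on its own machine; here plain functions and an
# in-memory SQLite database stand in for the application and database servers.

# Tier three: data storage (the back-end / database server).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "Acme", 120.0), (2, "Acme", 80.0), (3, "Globex", 45.0)])

# Tier two: business logic on the application server. The client never
# touches the database directly; it always goes through this layer.
def total_spent(customer):
    rows = db.execute("SELECT total FROM orders WHERE customer = ?",
                      (customer,)).fetchall()
    return sum(amount for (amount,) in rows)

# Tier one: the user interface layer on the end user's computer.
def show_report(customer):
    print(f"Total spent by {customer}: ${total_spent(customer):,.2f}")

show_report("Acme")   # Total spent by Acme: $200.00
```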
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B012227240400157X
Next-generation Internet architecture
Dimitrios Serpanos , Tilman Wolf , in Architecture of Network Systems, 2011
New networking paradigms
The trend toward heterogeneity in systems connected via the Internet has increased the diversity of networking paradigms used in the network. While traditional communication principles based on the client–server architecture still dominate the Internet, there are several other approaches to distributing information across the network. Some of them, for example, peer-to-peer networking, can be implemented in the existing networking infrastructure. Others, however, require a fundamental change in the way the network operates; such changes cannot be accommodated in the current Internet, but need to be considered for future Internet architectures.
Examples of communication paradigms that differ from client–server architectures are:
• Peer-to-peer networking: Peer-to-peer (P2P) networks combine the roles of the client and the server in each of the peer nodes [134]. Figure 15-1 shows client–server communication, and Figure 15-2 shows peer-to-peer communication in contrast. Instead of distributing information from a single centralized server, all peers participate in acting as servers to which other peers can connect. Using appropriate control information, a peer can determine which other peer to connect to in order to obtain a certain piece of information. This P2P communication can be implemented using existing networking technology, as it simply requires changes to the end-system application (a minimal sketch of this dual client/server role appears after this list).
• Content delivery networks: Content delivery networks aim to push content from the source (i.e., server) toward potential users (i.e., clients) instead of waiting for clients to explicitly pull content. Figure 15-3 shows this process. The proactive distribution of content allows clients to access copies of content located closer to them. Thus, better access performance can be achieved than when accessing the original server. The use of content distribution requires that the network support mechanisms that allow redirection of a request to a local copy. In practice, this type of anycast can be achieved by manipulating DNS entries as described in Hardie [61].
• Information fusion in sensor networks: Many sensor networks consist of low-power wireless sensors that monitor physical properties of the environment [152]. These sensor networks communicate using wireless ad hoc networks that do not provide continuous connectivity. Sensing results are transmitted between neighbors for further relay, as illustrated in Figure 15-4. In applications where data collected by multiple sensors can be aggregated, information fusion is employed (e.g., to determine the maximum observed temperature, a node can aggregate all the available thermal sensor readings and compute the maximum). In such networks, access to data is considerably different from conventional client–server architectures, as direct access to the source of data (i.e., the sensor) is not possible.
• Delay-tolerant networking: Delay-tolerant networks consist of network systems that cannot provide continuous connectivity [51]. Application domains include vehicular networks, where vehicles may be disconnected for some time, and mobile ad hoc networks in general. Protocols used in delay-tolerant networks are typically based on a store-and-forward approach, as it cannot be assumed that a complete end-to-end path can be established for conventional communication. Figure 15-5 shows this type of communication. The requirement for nodes to store data for potentially considerable amounts of time requires fundamental changes in the functionality of network systems.
These new communication paradigms shift the fundamental requirements of the network architecture.
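As a toy illustration of the peer-to-peer case, the sketch below gives every node both a serve (server-role) and a fetch (client-role) method; peer names, neighbor links, and the stored item are invented, and real P2P systems add discovery, routing, and actual network transport on top of this idea.

```python
# Minimal in-process sketch of the peer-to-peer idea: every node can both
# answer requests (server role) and issue them (client role).
class Peer:
    def __init__(self, name, content):
        self.name = name
        self.content = dict(content)   # pieces of data this peer can serve
        self.neighbours = []

    def serve(self, key):
        """Server role: answer a request from another peer."""
        return self.content.get(key)

    def fetch(self, key):
        """Client role: ask neighbours until one of them has the item."""
        for neighbour in self.neighbours:
            data = neighbour.serve(key)
            if data is not None:
                self.content[key] = data   # cache it; this peer now serves it too
                return data
        return None

a = Peer("A", {"song.mp3": "...bytes..."})
b = Peer("B", {})
c = Peer("C", {})
b.neighbours = [a]
c.neighbours = [b]

print(b.fetch("song.mp3") is not None)   # True: B pulls the item from A
print(c.fetch("song.mp3") is not None)   # True: C pulls it from B, which now acts as a server
```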
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123744944000153
Platform Architecture
Amrit Tiwana , in Platform Ecosystems, 2014
5.4.1.6 Client–server microarchitecture
The fourth widely used app microarchitecture is the client–server architecture (Figure 5.14). This arrangement evenly splits the four functional elements of an app between clients and servers. While data access logic and data storage reside on the server side, presentation and application logic reside on the client side. In practice, the application logic is often split between the client and server, although it predominantly resides on the client. This design balances processing demands on the server by having the client do the bulk of the application logic and presentation. It also reduces the network intensity of an app by limiting the data flowing over the Internet to only what the user needs. Placing the data access logic on the server side accomplishes this: queries are initiated by the client but executed by the server, which sends back only the results of those queries rather than the entire raw data, as client-based microarchitectures do. The downside of client–server app microarchitectures is that different types of client devices must be designed to invoke the data access logic on the server side in compatible ways. Depending on how the server-side functionality is split between an app and the platform, this arrangement can potentially free app developers to focus their attention on developing the core functionality of the app (where most end-user value is generated) and fret less about the data management aspects of the app.
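A rough sketch of this functional split follows. The class names and the tiny in-memory table are assumptions for illustration; in a real app the server half would sit behind a network API, and only the query results would travel over the Internet.

```python
# Illustrative split of the four functional elements described above. The
# class names and the tiny in-memory "database" are assumptions; in a real
# app the Server half would sit behind a network API, not a direct call.
class Server:
    """Server side: data storage + data access logic."""
    def __init__(self):
        self._tickets = [                     # data storage
            {"id": 1, "product": "app-A", "status": "open"},
            {"id": 2, "product": "app-A", "status": "closed"},
            {"id": 3, "product": "app-B", "status": "open"},
        ]

    def open_ticket_count(self, product):     # data access logic
        # Only the query result crosses the network, never the raw table.
        return sum(1 for t in self._tickets
                   if t["product"] == product and t["status"] == "open")

class Client:
    """Client side: application logic + presentation."""
    def __init__(self, server):
        self.server = server

    def show_dashboard(self, product):
        open_count = self.server.open_ticket_count(product)       # request to server
        severity = "OK" if open_count < 5 else "ATTENTION"         # application logic
        print(f"{product}: {open_count} open ticket(s) [{severity}]")  # presentation

Client(Server()).show_dashboard("app-A")   # app-A: 1 open ticket(s) [OK]
```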
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124080669000059
Cloud Infrastructure Servers
Caesar Wu , Rajkumar Buyya , in Cloud Data Centers and Cost Modeling, 2015
11.1 Cloud Servers
From Figure 11.1, we note that the first physical infrastructure item is the server, which sits within the second layer. It is not only one of the critical components of a cloud, but also the most expensive item in the cloud cost framework. A Microsoft research report [116] indicated that the amortized cost of servers is as high as 45% of the total capex of a cloud data center.
When people talk about server virtualization or cloud computing, we often hear a lot of technical jargon, such as x86, RISC, bare metal, guest OS, host, instance, workload, cluster, server farm, node, vMotion, vSwitch, hypervisor, etc. Many people, especially those from a non-IT background, such as finance, are confused by this jargon in comparison with traditional computing terms in a nonvirtualized environment. You might wonder why we have to bother to learn this jargon. The simple answer is that much of it represents the units used to measure virtualized or cloud infrastructure. As we have indicated before, the cloud is not a simple one-to-one relationship. In order to better understand the cost of the cloud, we have to understand this jargon and its meaning first.
Most cloud computing books mean only x86 servers when they discuss servers. Only a few books or papers have focused on the topic of RISC servers (basically UNIX). One of the main reasons is that x86 servers have almost caught up with RISC server performance while often costing significantly less. Based on IT Candor research [182], the revenue of the RISC server market dropped nearly 50% from 2003 to 2014 (see Figure 11.2).
However, in this chapter, we will not only cover x86 servers but also discuss RISC servers, particularly Oracle/Sun SPARC servers. The reason to cover RISC servers is that many large enterprises and government organizations have a certain number of Sun SPARC servers in their server fleets. For some telco companies, the number of SPARC servers may exceed the number of x86 servers in their data centers. It is absolutely critical for these companies to have the right cloud strategy and cost model to deal with those SPARC servers.
Now, the first questions we face are: What is a server, and what is a client? Why do we have a server rather than a mainframe or PC? How many server types are there? To answer these questions, we should take a close look at the history of computer and server evolution over the last 60 years (see Figure 11.3).
As we can see in Figure 11.3 (this figure is similar to Figures 1.2 and 1.3, but each has a different emphasis), the history of computing has gone through four or five eras. Beginning in the early 1950s and running through the 1970s was the mainframe era. The computer was a huge, heavy colossus that occupied a large amount of space. People had to go to a data center in person to get access to a computer or mainframe. The usage time had to be precisely scheduled. It was not only very costly but also very inconvenient. For some large calculations, such as finite element analysis or numerical computation, people had to work around the clock, 24x7, to get a satisfactory result. During this period, the IBM mainframe took over 70% of the world's computer business. This period was an era of centralized computing.
Beginning in the late 1960s and continuing through the 1980s, in order to make computers much more accessible to ordinary businesses, IT professionals tried to make computers lighter, smaller, and portable. It was the era of PCs and workstations. The most famous early PCs were the Apple I in 1976 and the IBM 5100 in September 1975. The most popular workstation vendors were Digital Equipment Corporation (DEC), Sun and Apollo, HP, Silicon Graphics International (SGI), and IBM. In comparison with mainframes, the workstation computer was much cheaper and more affordable for many ordinary companies. It successfully ran many applications, especially office applications, project management, 3D graphic displays, desktop publishing (DTP), and computer-aided design/computer-aided manufacturing/computer-aided engineering (CAD/CAM/CAE, such as AutoCAD and CATIA).
For most large CAD/CAM projects, team collaboration is essential; consequently, workstation computers required networking. Apollo initially ran the Aegis operating system, which was later replaced with the Distributed On-line Multi-access Interactive Network/Operating System (Domain/OS).
It had a proprietary token-ring network feature that could support relatively small networks of up to dozens of workstations in a typical office environment. It was an elegant network design that gave a certain degree of network transparency, but it could not interoperate with other existing network hardware and software. The IT industry went on to adopt Ethernet and TCP/IP. In the early 1990s, when the Internet started to become widespread, the client/server architecture became a better solution for many organizations and companies to utilize computer resources because it is accessible, affordable, open, and cost effective (see Figure 11.4). Due to improvements in network connectivity, it also became increasingly reliable.
One of the reasons client/server became so popular is its open platform or system. An open system defines a series of formal standards that different vendors can support. With a mainframe computer, by contrast, a customer had very limited choice or bargaining power on price. Now, customers could purchase hardware and software from different vendors. An open system allows for "mix and match" (MnM) or "plug and play" (PnP).
Because of open systems, software has slowly been separated from hardware, and many software companies have come to dominate the computer industry. Software defines everything.
One of the most successful software companies throughout the client/server era has been Microsoft. Since July 27, 1993, Microsoft has continuously released new versions of its server operating system every two to three years, from Windows NT 3.1 to Windows Server 2012 R2. Through October 18, 2013, it had released 10 versions of the Windows server OS. It dominates the server OS market share (see Figure 11.5).
11.1.1 A Client/Server Architecture
From Figure 11.4, the mechanism of the client/server architecture is quite easy to understand. The software or application installed on a client machine (a PC, desktop, or laptop computer) is the front end of the application. It manages local client resources, such as the monitor, keyboard, mouse, RAM, CPU, and other peripherals. If we replace it with a virtualized infrastructure, a remote virtual desktop infrastructure (VDI), it becomes a cloud VDI.
In comparison with mainframe terminals, the client is no longer a dumb machine. It has become a more powerful PC because it has its own computational environment. The client can be considered a customer who requests services. A server is similar to a service provider who serves many clients.
On the other side of the client/server architecture is the server. The function of the server machine is to fulfill all client requests. This means that the services the server provides can be shared among different clients. The server (or servers) is normally in a centralized location, namely a data center, though we also call it a server room, network room, LAN room, wiring closet, or network storage room, depending on the size of the server fleet. The connection between clients and servers is via either a dedicated network or the Internet. Theoretically speaking, all servers are totally transparent to clients. The communication between client and server is based on standard protocols, such as Ethernet or TCP/IP. Once a client initiates a service request, the server responds and executes the request, such as data retrieval, updating, dispatching, storing, or deleting. Different servers can offer different services. A file server provides file system services, such as document, photo, music, and video files. A web server provides web content services. A storage server hosts storage services with different service levels, such as platinum, gold, silver, bronze, backup, and archive storage services. An application server supports application services, such as office applications or email. In addition, a server can also act as a software engine that manages shared resources such as databases, printers, network connectivity, or even the CPU. The main function of a server is to perform back-end tasks (see Figure 11.6).
Among these different types of servers, the simplest is the file or storage server. With a file server, such as an FTP (File Transfer Protocol), SMB/CIFS (Server Message Block/Common Internet File System), or NFS (Network File System, from Sun) server, a client may request one or more files over the Internet or LAN. The load depends on the file size: if a file is very large, the client's request needs a large amount of network bandwidth, which drags the network speed down. Database, transaction, and application servers are more sophisticated.
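As a small illustration of a client asking a file server for a single file, the sketch below uses Python's standard ftplib; the host name, credentials, and file path are placeholders. Note that the entire file crosses the network, which is why very large files consume so much bandwidth.

```python
from ftplib import FTP

# Minimal sketch of a client requesting one file from a file server.
# The host, credentials, and file name are placeholders for illustration.
with FTP("files.example.com") as ftp:
    ftp.login("user", "password")          # or ftp.login() for anonymous access
    with open("report_copy.pdf", "wb") as local_file:
        # The whole file travels over the network to the client.
        ftp.retrbinary("RETR reports/report.pdf", local_file.write)
print("File retrieved from the file server.")
```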
Since the 1980s, the number of hosting servers has been growing exponentially (see Figure 11.7). When ISC began to survey host counts in January 1993, the number of hosts was only 1,313,000; by July 2013, the host count had reached 996,230,757, an increase of almost 760 times. The average hosting server's utilization rate, however, is much lower. Based on Gartner's report in November 2008, the average utilization rate of x86 servers for most organizations was only between 7% and 15% [185]. Our experience indicates that for some mobile content servers, the utilization rate is even below 1%. This has led to server consolidation by leveraging dramatic improvements in virtualization technology. In essence, Internet technology and services sparked the acceleration of host server growth, and growing server volume has triggered server virtualization, which lays the basic infrastructure foundation for a cloud.
Now that we have clarified the concepts of client, server, and client/server architecture, in the following sections, we will take a close look at both x86 and RISC servers.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128014134000118
Reality Check
BERTHOLD DAUM , in Modeling Business Objects with XML Schema, 2003
11.1 MOTIVATION
Relational databases became a strategic data storage technology when client-server architectures in enterprises emerged. With different clients requiring different views on the same data, the ability to freely construct complex data structures from the simplest data atoms was crucial—a requirement that the classical hierarchical database systems could not fulfill. Relational technology made the enterprise data view possible, with one (big) schema describing the information model of the whole enterprise. Thus, each relational schema defines one ontology, one Universe of Discourse.
And this is the problem. Most enterprises cannot afford to be a data island anymore. Electronic business, company mergers, collaborations such as automated supply chains or virtual enterprises require that information can be exchanged between enterprises and that the cost of conversion is low. This is not the case if conversion happens only on a bilateral level, starting from scratch with every new partner.
XML represents a way to avoid this chaos. Because of its extensibility, XML allows the use of generic, pivot formats for various proprietary company formats. Usually business groups and associations define these formats. If such a format does not satisfy the needs of a specific partner completely, it is relatively easy to remedy by dialecting—extending the generic format with additional specific elements. This is why the integration of XML with relational technology is all important for enterprise scenarios. We will see that XML Schema has been designed very carefully with this goal in mind.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781558608160500136
Source: https://www.sciencedirect.com/topics/computer-science/client-server-architecture