Client-server architecture

Source publication
Article
Full-text available
Currently, virtualization solutions are employed in the vast majority of organizations around the world. The reasons for this are the benefits the approach provides, centered on increases in security, availability and data integrity. These benefits are also present in a new technique, which emerges from the same concept and is called desktop vi...

Contexts in source publication

Context 1
... technology is physically based on the use of client-server architecture, which, along with the implementation of desktop virtualization, places personal computers on one or more physical machines. The following sketches (Fig. 1 and 2) illustrate the operation of the technology, physically and logically, showing the various existing layers. Figure 1 indicates the basic functioning of a client-server architecture. ...
Context 2
... following sketches (Fig. 1 and 2) illustrate the operation of the technology, physically and logically, showing the various existing layers. Figure 1 indicates the basic functioning of a client-server architecture. From it, one can extract that clients are totally dependent on two main things: one is the server that processes their requests and the other is the communication network that physically separates them. ...
Context 3
... (software), creating the main focus of this article, the Virtual Desktop. It is important to note that an operating system may or may not be installed directly on the hardware that supports the VMM. That means it is possible to implement an intermediate software layer by installing an OS between the hardware and the hypervisor, as illustrated in Fig. 2 (CITRIX, ...
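To make the dependency described in Context 2 concrete, the minimal sketch below (Python, with a hypothetical loopback address and port) shows a client whose only useful work is a request processed by a remote server; if the server process or the network path fails, the client simply times out. It is an illustration of the client-server pattern, not code from the cited article.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # hypothetical address and port for this sketch

def run_server():
    # Accept a single connection and echo the processed request back.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"processed: " + data)

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# Client side: every request crosses the network and is processed remotely,
# so a server outage or a broken link stops the client entirely (timeout below).
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.settimeout(2.0)
    cli.connect((HOST, PORT))
    cli.sendall(b"open document")
    print(cli.recv(1024).decode())
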

Similar publications

Article
Full-text available
User-in-the-loop (UiL) content delivery is a recently proposed scheme for personalized content retrieval over mobile communication networks. It is a promising scheme that can better manage the overall user quality-of-experience (QoE) throughout the entire content retrieval process. The performance of the scheme, however, has only been investigated...

Citations

... It is also necessary to mention that, to the best of our knowledge, this paper conducted the widest correlated study in the IP delivery field (Krishnan and Sitaraman, 2013). It provides a tool to monitor 16 key metrics and segments them into 6 different scenarios, while providing the coding methods and mathematical modelling to interpret results (Buyya et al., 2008; Oliveira et al., 2013a; 2013b; Chai, 2014; Bedicks Jr., 2008; Cloonan and Allen, 2011; Edwards, 2013; Ellacott, 2014; Greenfield, 2012; Kaduoka, 2016; Katel, 2014; Statista, 2016; McMillan, 2013). Firstly, it was necessary to expand the functionalities of an existing software tool that monitors YouTube videos accessed via the Google Chrome browser for desktop, in order to capture all the intended variables. It is worth mentioning that the authors of this article also developed the earlier version of this tool. ...
... In computer networks, that is, Internet Protocol networks, Quality of Service is a set of technical conditions that enables the qualitative operation of a given application that relies on an IP network to function (Oliveira et al., 2013a; Baldini et al., 2014; Tanenbaum and Wetherall, 2010). These requirements translate into specific QoS parameters in computer networks, such as throughput, latency, jitter and losses (Oliveira et al., 2013b; Khanafer et al., 2014). ...
Article
Full-text available
The perpetual rise of video on demand is currently one of the leading challenges the telecommunications industry faces. It is permeated by the eternal comparison with a service that continuously sets the bar at a highly elevated consumer quality, i.e., Broadcast Television, and the user, the advertiser and all other stakeholders involved are not only used to it, but demand equal and/or similar value. Such a dichotomy has made this relatively new medium create a long list of technologies to make it as viable as possible. However, the solutions only work to a certain extent and critical problems remain unaddressed. One in particular is delivery assurance in Internet Protocol networks, which affects every stakeholder on these New Media outlets. Keeping this issue in mind, this work developed a range of experiment scenarios through a software-based apparatus in order to convey a technical assessment of key variables in this ecosystem. From these tests, an analysis was conducted which brought about a series of discoveries regarding the technology's performance in terms of availability and audience. Ultimately, it culminated in one of the central contributions of this research, that is, how to mathematically interpret this type of data and indicate its statistical relations. These methods unveiled the impacts and feasibility of: privacy protocols, mobile and landline connections, latency, delay, loading time, interaction volumes, content history, channel characteristics and user attributes. In other words, this work developed a tool to measure audience and availability in IP delivery, and it also forged a method to interpret and model the measurements into statistical patterns that can provide predictability. Additionally, the significance of this research was confirmed through the Law of Large Numbers, which showed that the data has statistical validity to interpret and envision behavior. That is, the work presents data with a reliability of 97% and a margin of error of 3.03%, which confirms this, to the best of our knowledge, as the most comprehensive and accurate study of this nature in comparison to the state of the art in the literature. It is also necessary to state that the tool is restricted to measuring quantities related to each content item displayed on the largest platform for online video and the most utilized desktop Web Browser, i.e., YouTube and Google Chrome. © 2018 Vitor Chaves de Oliveira, Sérgio Bimbi Junior, Andreiwid Sheffer Corrêa, Inácio Henrique Yano, Mauricio Becker, Paulo Batista Lopes and Gunnar Bedicks Junior.
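To illustrate how a confidence level and a margin of error of the kind quoted above relate to sample size, the sketch below applies the standard proportion formula (worst case p = 0.5). The sample sizes are chosen only for illustration; this is not a reconstruction of the cited study's own statistics.

# Textbook relation between sample size, confidence level and margin of error
# for a proportion. Values here are illustrative, not the cited study's data.
from statistics import NormalDist

confidence = 0.97                        # 97% confidence level, as quoted above
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)

def margin_of_error(n, p=0.5):
    # Half-width of the confidence interval for a proportion with n samples.
    return z * (p * (1 - p) / n) ** 0.5

for n in (500, 1000, 1300, 5000):        # sample sizes chosen only for illustration
    print(f"n={n:5d}  margin of error = {100 * margin_of_error(n):.2f}%")

At this confidence level, margins around 3% correspond to samples on the order of a thousand observations, which is the kind of relation the Law of Large Numbers argument above relies on.
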
... After checking the connection quality (Oliveira et al., 2013a; 2013b), it becomes possible to perform a test to see whether a given station is an offender or not (in case the MAC Anomaly is present). A new approach using a new mathematical calculation is being studied. ...
Article
The IEEE 802.11 standard is largely used everywhere as a cheap way to access the internet, but the majority of devices used do not provide a standard way to manage them. Mobile stations competing for an access point may cause a general wireless network failure, known as the “MAC Anomaly”. Presented here is an access point with modified firmware that monitors wireless connection quality and has a programmable capability allowing network management to be done in a standardized manner, using SNMP for example. In this article, the embedded Linux “OpenWRT” installed on very cheap, reduced-size hardware was used. By means of Bourne shell script programming, it is possible to collect all important operating parameters and data of the access point. From this, it is possible to gain considerable control over it. IEEE 802.11 (Wi-Fi) access point devices are routers, while embedded Linux has routing capabilities. Thus, it is easy to implement traffic policies by means of Bourne shell script programming. Traffic shaping is one of the access point's capabilities successfully tested and demonstrated in this study. MAC anomaly detection in IEEE 802.11 networks can be easily implemented by means of scripts as well. Network throughput data was collected and plotted, making it possible to observe the MAC anomaly in visual charts. A mathematical model to apply to the collected network throughput data is under study, aiming to identify the anomaly through calculations. The object of the current study is to integrate everything: measurement of operating data (network throughput in Mbit/s), identification of the MAC anomaly and its immediate mitigation. The result was the conception of an integrated device to measure, identify and mitigate the MAC anomaly in a test field and thereby restore network throughput, representing a gain of 42.5216%. © 2015 Argemiro Bevilacqua, Vitor Chaves de Oliveira, Gunnar Bedicks Jr., Alexandre de Assis Mota and Lia Toledo Moreira Mota.
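The abstract notes that a mathematical model for identifying the anomaly is still under study, so the sketch below is only a hypothetical detector: it flags intervals where per-interval throughput collapses against a moving baseline. The window size, drop ratio and the throughput trace are all assumptions of this sketch, not the authors' method.

# Hypothetical MAC-anomaly-style detector over per-interval throughput samples.
from collections import deque

def detect_collapse(samples_mbps, window=10, drop_ratio=0.4):
    # Yield sample indices where throughput falls below drop_ratio * recent mean.
    recent = deque(maxlen=window)
    for i, mbps in enumerate(samples_mbps):
        if len(recent) == recent.maxlen:
            baseline = sum(recent) / len(recent)
            if mbps < drop_ratio * baseline:
                yield i, mbps, baseline
        recent.append(mbps)

# Invented throughput trace (Mbit/s): stable, then the cell rate collapses.
trace = [24, 25, 23, 26, 24, 25, 24, 23, 25, 24, 9, 8, 7, 9, 8]
for idx, value, base in detect_collapse(trace):
    print(f"sample {idx}: {value} Mbit/s vs baseline {base:.1f} Mbit/s -> possible MAC anomaly")
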
... Desktop virtualization [3][4] is a type of server-based computing (SBC), a technology in which the user accesses virtual desktop machines, PC-like environments hosted on data center servers, through local devices, usually thin clients [5][6], and uses the data, operating system (OS), applications, and so on there. Namely, this is a system that can establish dozens of computers on a single central server, acting as the computer main body through virtualization technology, and can support work much as if a PC were being used. ...
Article
Desktop virtualization can dramatically reduce maintenance costs and improve security compared with previous desktop environments by using various virtualization techniques. Also, since information leakage is blocked beforehand by data centralization, it is easy to manage information security. Desktop virtualization provides creation and duplication of data and standardized desktop environments through easy and fast virtualization operations, so it is possible to improve the efficiency, stability and fusibility of virtualization. In this paper, with desktop virtualization, power saving effects are obtained from 65,750 (kW) down to 7,300 (kW), that is, from 480 (W) to 50 (W) for using one desktop for 8 hours per day. In addition, 62 desktops and 62 monitors are consolidated into one operational server with 62 thin clients. As a result, security is greatly improved by data centralization, whereby the user can access the main server as a thin client within the given space.
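A short worked example of the per-seat energy arithmetic behind the 480 W and 50 W figures quoted above. The number of working days is an assumption of this sketch and server-side consumption is ignored, so the totals are illustrative rather than a reproduction of the paper's aggregate values.

# Illustrative per-seat energy arithmetic for the 480 W -> 50 W figures above.
DESKTOP_W, THIN_CLIENT_W = 480, 50      # power draw per seat (from the abstract)
HOURS_PER_DAY = 8                       # usage quoted in the abstract
DAYS_PER_YEAR = 250                     # assumed working days (not from the paper)
SEATS = 62                              # seats consolidated onto one server (abstract)

def annual_kwh(watts):
    # Energy consumed by the whole fleet over a year, in kWh.
    return watts * HOURS_PER_DAY * DAYS_PER_YEAR * SEATS / 1000

before, after = annual_kwh(DESKTOP_W), annual_kwh(THIN_CLIENT_W)
print(f"before: {before:,.0f} kWh  after: {after:,.0f} kWh  "
      f"saving: {100 * (before - after) / before:.0f}%")
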
... In computer networks, i.e., Internet Protocol networks, Quality of Service is a set of technical conditions that enables the qualitative operation of a given application which relies on an IP network to function (Oliveira et al., 2013a; Baldini et al., 2014). These requirements translate into specific QoS parameters in computer networks, such as throughput, jitter, latency and losses (Oliveira et al., 2013b; Khanafer et al., 2014). ...
Article
Recent events have shown that content delivery networks as well as Broadband Providers are failing to provide a continuous service, especially on live video stream transmissions for numerous customers. This study aims to present a methodology to uninterruptedly measure the uplink and downlink of a given IP connection. Based on an open-source assemblage of development and data storage platforms, a software tool was programmed that automatically performs the proposed assessment. The significance of availability is addressed at length in this article, since it is the first requirement regarding quality of service in any engineered communication. The proposed method relates to the fact that, for a video and/or audio web stream to occur successfully, a connection with each end-user device needs to be sustained the entire time, establishing a complex two-way communication. Meanwhile, traditional cable and satellite broadcasts are less stressful one-way connections, demanding only that the end-user device be placed within range of a radio signal. Given this scenario, and adding the substantial increase in demand for high-quality media content from the internet, an essential need emerges to control the service delivered by CDNs and Broadband Providers. The developed software also creates a reasonable billing mechanism, which can function as a new technical milestone in contracts and/or Service Level Agreements. This tool also assigns a key function to the user's role, since it requires all desired tests to be set up manually, which might limit it for some routines. © 2015 Vitor Chaves de Oliveira, Gunnar Bedicks Junior and Cristiano Akamine.
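In the spirit of the methodology described above, the sketch below polls an endpoint at a fixed interval and derives availability together with the latency and jitter parameters mentioned in the citing excerpt. The target URL, probe count, interval and timeout are assumptions of this sketch, not the cited tool's configuration.

# Hypothetical availability and delay probe over a fixed monitoring window.
import time
import urllib.request

TARGET = "https://example.com/"           # placeholder endpoint (assumption)
PROBES, INTERVAL_S, TIMEOUT_S = 12, 5, 3  # ~1 minute of monitoring for the example

rtts = []                                 # round-trip times of successful probes (s)
for _ in range(PROBES):
    t0 = time.time()
    try:
        with urllib.request.urlopen(TARGET, timeout=TIMEOUT_S) as resp:
            up = 200 <= resp.status < 400
    except OSError:                       # DNS failure, refused connection, timeout, HTTP error
        up = False
    if up:
        rtts.append(time.time() - t0)
    time.sleep(INTERVAL_S)

availability = 100 * len(rtts) / PROBES
latency = sum(rtts) / len(rtts) if rtts else float("nan")
# Jitter as the mean absolute difference between consecutive delays.
jitter = (sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)
          if len(rtts) > 1 else 0.0)
print(f"availability={availability:.1f}%  mean latency={latency:.3f}s  jitter={jitter:.3f}s")
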
... The gradient-based method is intrinsically sequential and cannot efficiently exploit parallel architectures; on the other hand, the global optimization methods, in general, are easily parallelizable and can greatly benefit from distributed architectures, which allow running several simulations simultaneously (Selberg et al., 2006). Current developments in multi-core processors allow parallelization of numerical codes and, as a consequence, speed up the calculations (Oliveira et al., 2013). ...
Article
Full-text available
The exploitation strategy of hydrocarbon reservoirs can be technically and economically optimized only if a reliable numerical model of the reservoir under investigation is available to predict the system response for different production scenarios. A numerical model can be considered reasonably trustworthy only after calibration, which means the model has at least proved its ability to reproduce the historical behavior of the reservoir it represents. The calibration procedure, also known as history matching, is the most time-consuming phase in a reservoir study workflow. Over the last decades, several methods, classified as Assisted History Matching (AHM), have been proposed for a partial automation of the model calibration procedure. Meta-heuristic methods have been used to iteratively reduce the misfit between simulated and historical data. However, the main limit for the application of these algorithms is the amount of computational time necessary for the evaluation of the objective function, and thus for the simulation runs. On the other hand, the new trend of collective computing offers a solution to CPU-intensive tasks by distributing the work among several computers located in different places but globally connected through the World Wide Web. In this study, a novel workflow for assisted history matching is proposed. The results proved that this workflow provides better and more representative solutions in a fraction of the time needed by traditional approaches.
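Since the citing excerpt stresses that the costly objective-function (simulation) evaluations are independent and can be distributed, the sketch below shows that parallelization point with a placeholder quadratic misfit standing in for a reservoir simulation run. It illustrates the general idea only; it is not the workflow proposed in the article.

# Candidate parameter sets are scored independently, so the expensive misfit
# evaluations can be spread across worker processes.
from concurrent.futures import ProcessPoolExecutor
import random

def misfit(params):
    # Placeholder objective: distance to an arbitrary "historical" optimum.
    target = (0.3, 0.7, 0.5)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def random_candidate():
    return tuple(random.random() for _ in range(3))

if __name__ == "__main__":
    candidates = [random_candidate() for _ in range(64)]
    with ProcessPoolExecutor() as pool:        # "simulations" run in parallel
        scores = list(pool.map(misfit, candidates))
    best = min(zip(scores, candidates))
    print(f"best misfit {best[0]:.4f} at parameters {best[1]}")
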
... Given the rapid pace at which the Desktop Virtualization technique is being disseminated (CITRIX, 2011; Thibodeau, 2012; Oliveira et al., 2013), it is of fundamental importance to study its effects, since designing a network to support such solutions requires metrics that outline their impacts. ...
Article
Full-text available
In recent years, virtualization computing has become a worldwide reality, present in the datacenter servers of most organizations. The motivations for the use of this solution are focused primarily on cost reduction and increases in availability, integrity and security of data. Based on these benefits, the use of this technology has recently been extended to personal computers as well, that is, to desktops, giving birth to the so-called desktop virtualization. Given the technical advantages of the approach, its growth has been so significant that, before 2014, it is expected to be present in over 90% of organizations. However, this new method is completely based on a physical client-server architecture, which increases the importance of the communication network that makes this technique possible. Therefore, analyzing the network in order to investigate the effects according to the environment implemented becomes crucial. In this study, the local client hardware and the application, i.e., the service used, are varied. The purpose was to detail their effects on computer networks in terms of a Quality of Service (QoS) parameter, throughput. Secondarily, perceptions regarding the Quality of Experience (QoE) are outlined. This culminated in an assessment that traces the feasibility of applying this technology.
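As a rough illustration of the kind of scenario-by-scenario throughput comparison the abstract describes, the sketch below aggregates samples per client hardware and application pair. The scenario labels and values are invented placeholders, not results from the study.

# Illustrative aggregation of throughput samples per test scenario.
from statistics import mean

samples_kbps = {
    ("thin client", "web browsing"):   [310, 295, 320, 305],
    ("thin client", "video playback"): [1450, 1520, 1480],
    ("fat client",  "web browsing"):   [280, 290, 275],
    ("fat client",  "video playback"): [1380, 1410, 1395],
}

for (hardware, application), values in samples_kbps.items():
    print(f"{hardware:11s} | {application:14s} | mean throughput {mean(values):7.1f} kbit/s")
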
Article
Common problems in experiment-based computer science and technology teaching at universities are analysed, and it is proposed that virtualisation technology be employed to improve the existing experimental environment. Specific implementation plans for a virtualisation platform for an experimental teaching centre are provided. In the actual teaching process, the deployed virtualisation platform was tested and the effects of its application analysed. The results show that the virtualisation platform, if well operated, meets the demands of experimental teaching and dramatically increases laboratory utilisation.