If you have experience with other application environments-- like desktop or client-server applications-- you may see how the approach of web applications is different. The most noteworthy differences are the stateless protocol of the web, the need for a web browser to be present to interpret server responses, and the generally anonymous access that web servers grant their users via their clients.
In short, web applications have amnesia. By default, they do not keep track of users and their previous actions. It's true that client requests are logged, and it's true that a web server can use environment variables to identify users. .NET, with its "postback" apparatus, simulates state, but that is so much smoke and mirrors. Other application environments maintain an active relationship with their clients. With web servers and their clients, the interactions are encapsulated in the client request-server response model. When a server has responded to a client, the relationship is over.
Some client-server relationships come through a dedicated, proprietary interface (e.g. a SQL Server management console or a chat program like Skype or ICQ), whether that be a full-fledged application or server output to a command line interface. With the web, the web server is largely ignorant of the client software in use. If a client generates a well-formed request, the web server is happy. Servers do track which web browser-- which user-agent-- is making a request, and applications can respond differently by reading those environment variables. But web servers will serve out responses regardless of which client software made the request.
Anonymity is one key ingredient in what has made the web so popular. Without so much as a library card, users can grab content from any publicly accessible web server. The bias of the World Wide Web is shifted in favor of distributing content. Other distributed data (like data available through a telnet session or a bulletin board) is often gated by a password or otherwise limited to authorized users. Before the popularity of the web, distributed access was for the purpose of giving authorized users admittance while barring the unwashed masses. This worldwide welcome mat that is the web asks for trouble in the form of traffic overload and security challenges. Because any number of people can show up at the doorstep of your website, traffic tolerance and security are of great concern.
Spelling out these differences is the first step in seeing the pitfalls and advantages that come from using the Web. If you are coming from programming in a desktop environment, some concepts get turned on their head. Coming from a non-web client-server environment, you will find a lot of problems that you didn't have to consider before.
Traditional developers moving into web application development may take a few things for granted. For instance, the client-server relationship makes it difficult to maintain state between a user and the website they are using. Unlike with a desktop application, or even a client-server application on a LAN, data sent from the user out into the world is open prey unless encrypted or otherwise protected.
In short, the communication across the web is a series of requests and responses. Web pages, file transfers, emails-- it all boils down to requests sent by web clients to web servers, which respond with output destined for the web client. The web server may do a hundred things when prompted by the request sent into a web application, but in the end, it always sends back a response. There is no constant connection between the web client and the server. (Some protocols, like FTP, do ensure that the connection stays open and recheck it as needed.) The interaction between the client and the web server application is dormant when it is not actively in use-- even dormant in between the series of requests and responses that stream across the Internet. This intermittent contact is not a bad thing. In those spaces between your signals, everyone else's signals can nudge in and get their space in the perpetual rush hour traffic that is the Internet.
Table 1-1 lists the strengths and weaknesses of a web application.
More Than Just HTML
Most of the content delivered on the web is HTML content. When we move away from web browsers, we find an array of other uses for web-delivered output. Today, the Internet is used to connect remote sensing equipment, perform financial and information transactions, allow online gaming and meet real-time communication needs. The list of functions the Internet carries out is continually growing. You may see a need to distill data into a web browser or a customized client application. How you do this is your choice. Data can be stored in a relational database, and that data can be exhumed for its historical value to identify trends. If there isn't long-term value in storing the data, it can be served as static content on a web page and remain only until the next update of the page, whether that be in an hour, a day or a week.
Versus Desktop Apps
There are significant differences between a desktop application and a web application in distribution, platform dependence, horsepower, bug fixes and dependability. Table 1-1 provides a quick comparison.
Table 1-1: Strengths and weaknesses
Strengths | Weaknesses
Platform independence: any client machine with a web browser can use the application | Dependent on the Internet connection and every system between client and server
Bug fixes deployed on the server reach all users at once | A showstopper bug disables the application for every user until it is repaired
No distribution or installation; users tap the application like a utility | If the web site shuts down, the application and its perceived value vanish with it
Server horsepower can serve modest client machines | Processing power is divided among all of the connected clients
Purchase Versus Utility
The difference between a desktop application and a web-delivered application is like the difference between a purchase and tapping into a utility. I once asked a comic book store owner why he wasn't as worried about his comics being stolen as his paperbacks. He replied that the shoplifter who steals a comic book runs the risk of being caught, getting banned from the store and not getting to buy the next issues of his favorite comic book (oh, and a criminal record, which took a back seat to missing out on issue #125 of the X-Men). Web services are like this too. They are only of value for the duration of their availability. No matter how much a user spends to gain access, a web site and/or a web application is a utility. The taps can get shut off and all of the perceived value will disappear in a matter of minutes. If your favorite desktop application developer evaporates, you still have your product.
Desktop applications are developed, tested, perfected (theoretically), and distributed either through retail sales and distribution or via electronic distribution. Web applications are developed, tested and perfected, and then the web site is opened for use. Customers use their Internet connection to send information from their client machine to the web server for processing. The server does its work and communicates back to the client. Most web-based application products run through the client web browser, while the remainder (things like ICQ or Gnutella-based applications) are custom interfaces built as desktop applications that rely on the web application for their functionality.
The utility model of web applications gives users independence. They can get up from their computer with a scrap of paper holding their login and password, go anywhere else and tap into the same utility. The problem is that this puts the web application vendor on a 24x7x365 treadmill. If they trip, their users fall. This means that web site health is duct-taped to user confidence. I have seen web sites that have had hiccups and lost half of their users in a day. It's like a store where the air conditioner is broken and half of the customer base swears off the place.
Platform Independence
Platform dependence is overcome through web applications. It is true that the web application itself needs to be developed for the server platform it will reside upon, but the clients can be completely heterogeneous. A user on an old Mac, a brand new Windows machine or even a wireless device has the same potential to make use of a server-based application. The main limitation is whether or not that client machine has a web browser capable of connecting to the Internet. While desktop software developers agonize over making a Mac distribution and a Linux distribution, web applications can serve all of the different platforms at once. Web browsers homogenize the interface between the client and the server and factor out the platform. If the application does have a client component-- for example JavaScript-- some attention may have to be paid to the client platforms, but server-side browser/platform detection can lessen that impact by serving out client-side code that is friendlier to a certain platform, as in the sketch below. In general, there is far less work involved in making a web application that may have to send out several chunks of client-specific code than in recreating an application for several platforms (including PCs, thin clients and wireless devices).
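As a minimal sketch in PHP (the include file names are hypothetical), server-side detection boils down to reading the user-agent string the browser reports and choosing which flavor of client-side code to emit:

<?php
// A minimal sketch of server-side browser detection. The include files
// are hypothetical; the point is choosing client-side code by user-agent.
$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (strpos($agent, 'MSIE') !== false) {
    include 'ie_friendly.js.php';        // hypothetical IE-flavored JavaScript
} elseif (strpos($agent, 'Mozilla') !== false) {
    include 'netscape_friendly.js.php';  // hypothetical Netscape-flavored JavaScript
}
// Anything else (text browsers, wireless devices) gets no client-side code.
?>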
Desktop applications outstrip web applications in realized processing power. A needy desktop application can suck up every last iota of memory and virtual memory before it runs out. A web application with one client will have far more power available than the average desktop machine, but with two clients it is halved; with four, it's quartered. A popular web application either has to run on a processing juggernaut or be very svelte. Web applications that perform resource-intensive functions, like graphics processing or real-time analysis, are comparatively rare and may continue to be for some time. However, there is some merit to turning the server-client relationship upside down by using a multitude of clients as the processors while the web server merely orchestrates the client activities. This is how the SETI@home program gets some of its signal processing performed. It hands out an application that runs on desktop PCs, sitting in the background and using free processing cycles. When a block of signals has been processed, the application posts the results to the SETI server and requests a new block for analysis.
Perpetual Technical Support
With the pace of the IT world today, many desktop applications are given only essential, bare-minimum quality assurance and testing-- then they're out the door. This turns consumers and clients into beta testers. When bugs are discovered, bug patches are released to the software holders: sometimes as gratis fixes, sometimes as new versions (sure, Blah '98 was horrible-- Blah '03 is better, honest, really). If communications break down between customers and software producers, the customers can be left in the cold with a faulty product. Web applications, on the other hand, can be fixed on the fly. If the bug is a showstopper, the product is useless for all of the web application's clients until it's repaired. In that respect, bug fixes on web applications move faster than they do on an equally popular desktop product.
Desktop apps also beat out web apps in dependability. Barring bugs, you can start up a desktop app every time you sit at your computer. They are platform and OS dependent, but many software developers try to release applications for the popular operating systems (Win32, Mac, then Linux). For a web application, your Internet connection has to be up, as does the ISP and all of the systems in between the client and the server. If any link in this chain is broken, a server-based web application is useless. In addition, there are many desktop applications still in use today, even though the software companies that created them either closed their doors or discontinued support years ago. When a web site becomes extinct, so do its web applications. That's the downside, and it is countered by the fact that a web application has to work and has to satisfy its users. If it fails, the users will vote with their feet. If the users have a vested interest in the application's success (like the users of a corporate intranet), the problem will not go away until it's resolved.
Maintaining State
A web server is an amnesiac. This can't be stressed enough. Imagine if your desktop application kept no more state than a web server keeps about its client contacts: every time you moved your mouse, the cursor would snap back to the middle of the screen an instant later. By default, web servers and web applications do not maintain state with their clients. They are handed a URL; they process it and serve out a response. They do log the client requests and certainly can store data in a variety of ways, but each time a web application is used, it starts anew and resets all of its variables.
This is not a brick wall: it's a hurdle. If you opened up your word processor, typed in a few words, checked whether it caught all of the typos and then closed the application, you would face exactly the same dynamic: your word processor would also lose all of its unsaved data. In order to clear this hurdle, we need to consider two types of web applications. The first is purely for output to the client. If the web application has the sole job of serving out content, then that is what it will do. A good example of this is a CGI script that gives you a random quote: it finds the quotes, chooses one at random, serves it out, forgets which one it just served and then the application ends. The second type has an ongoing relationship with the user and has to store data from input to output to the next input again. There are thousands of examples of web applications that maintain state (or at least try to retain some memory). A password-protected area is a good example. Either through cookies, detecting the IP address of the user, or creating a "session ID", the server retains a memory of a web client and applies that memory to each interaction with that client. Should that connection end, or should the cookie expire, the web server will lose state again and the user must log in or otherwise identify himself once more.
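A minimal sketch of this kind of memory, assuming PHP's built-in session support: the server hands the browser a cookie carrying a session ID and re-attaches the stored data on each later request.

<?php
// A minimal sketch of maintaining state with a session ID.
session_start();                      // sends or reads the session cookie
if (!isset($_SESSION['visits'])) {
    $_SESSION['visits'] = 0;          // first contact with this client
}
$_SESSION['visits']++;
print "You have loaded this page " . $_SESSION['visits'] . " time(s).";
// If the cookie expires or is deleted, the server loses state again.
?>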
Dealing With Input
Input has two sides to it. On one side, the web server needs input in the form of URLs to process, query strings to parse and environment variables to read. Without input, the server hums away doing nothing. The other side of input is malicious or malformed input. With a desktop application, if you put in data that hasn't been considered, the application may freeze the computer, throw back garbage or crash. This is the same fallout you will find with a web application, except in this case, if one person crashes the application or freezes the server, everyone who comes to the server will suffer. Users don't have a vested interest in keeping the server running. They also don't have a means of resetting the server every time it coughs and sputters. If their actions cause problems for the server, it's the Webmaster's problem, not theirs. Because of that reality, you have to consider security and risk limitations from the very inception of building a web application. A good example of just how easy it is to foul up a web server is this line of code:
rm -fr /
By itself it's harmless. But if this line of code makes it through your script and ends up inside a pair of backticks in a print statement:
print `rm -fr /`;
it will try to wipe out everything on the Unix/Linux server that the web server process has permission to delete. Ouch. Every platform has its Achilles heel; even Unix/Linux is vulnerable. If you allow users' data to reach a print statement like this unchecked, it will wreak havoc.
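As a hedged sketch of keeping user data from reaching the shell intact (using PHP's escapeshellarg() and a hypothetical file-listing feature), the idea is to neutralize the input before the backticks ever see it:

<?php
// A sketch of neutralizing user input before it reaches the shell.
// escapeshellarg() wraps the value in quotes, so a string like "rm -fr /"
// arrives at the command line as inert text rather than a command.
$file = isset($_GET['file']) ? $_GET['file'] : '';
$safe = escapeshellarg($file);
$listing = `ls -l $safe`;   // backticks run a shell command, as in Perl
print '<pre>' . htmlspecialchars($listing) . '</pre>';
?>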
Another gaping hole that used to exist on Microsoft servers exploited IIS and how it served out Active Server Pages. For example, the URL http://www.dewolfe.bc.ca/signup.asp served out the HTML of the ASP as it was supposed to-- all of the server-side code parsed out, and everything was hunky-dory. The URL http://www.dewolfe.bc.ca/signup.asp::$DATA instead sent out the ASP source unparsed-- all of its inner workings available for download and exploitation. The "::$DATA" part of the URL exposed all of the site's applications to hackers. This is an example of an input hole that a web developer didn't create but did have to patch.
Input can come from several sources, chiefly form input and environment variables. Collected data-- data pulled from a database (relational or flat file) and data retrieved from automated clients-- is covered in our database section (see Page xx).
Form Input
Overt input from form information is almost always supplied actively by the users. This is also the most common point where bogus data and security breaches can sneak in. Later in this book, we will cover form input, how it can be checked and to what extent it can be trusted. Below, we discuss the three principal means of introducing client requests to the server.
GET, POST and HEAD Methods
These methods are requests: the web client requests the information from the server. That request is the input.
The GET data is appended to the URL that is sent to the server. For example, http://www.dewolfe.bc.ca/input.asp?name=Mike&favorite+color=blue is a URL with data sent via the GET method. Everything after the question mark has been URL encoded so that it can survive the trip to the server. In other words, all of the illegal characters (spaces, exclamation points, question marks, etc.) are converted into other characters. When received by the web application, they are decoded. Sometimes the language does this automatically, as with PHP, ASP and others. Other times it has to be converted "manually" with the help of a subroutine, as is the case with Perl when you opt not to use CGI.pm.
The nice thing about the GET method is that you can build this query string with a script and output it as a URL. The outputted HTML doesn't need a form; it has URL-encoded form data appended to a URL. When you want a user or a search engine to bookmark a page built with an application, including the variables using the GET method is the way to go. A sketch of building such a URL follows.
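A minimal sketch of building such a URL in PHP, reusing the input.asp address from the example above (the link text is made up):

<?php
// Build a bookmarkable URL by URL-encoding each key and value by hand.
$url = 'http://www.dewolfe.bc.ca/input.asp'
     . '?name=' . urlencode('Mike')
     . '&' . urlencode('favorite color') . '=' . urlencode('blue');
// urlencode() turns spaces and other illegal characters into safe ones,
// e.g. "favorite color" becomes "favorite+color".
print '<a href="' . $url . '">See the results</a>';
?>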
The POST method sends the data from the web client in the body of the request, as an object separate from the URL. This information can be retrieved from the request and used by the application on the web server. Done manually, it is found as part of the standard input handed to the CGI script.
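As a hedged sketch, in PHP the decoded values simply show up in the superglobal arrays-- $_GET for query string data and $_POST for data sent in the request body (the "color" field name here is hypothetical):

<?php
// Reading decoded input. PHP has already URL-decoded these values.
$name  = isset($_GET['name'])   ? $_GET['name']   : '';   // from the query string
$color = isset($_POST['color']) ? $_POST['color'] : '';   // from a POSTed form
print 'Hello, ' . htmlspecialchars($name) . '. ';
print 'Your favorite color is ' . htmlspecialchars($color) . '.';
?>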
An alternative method on the scene is M-POST. M-POST is associated with SOAP and XML. It behaves like POST, but in dealing with SOAP and RPC services, an M-POST request can pass through a proxy or firewall where a POST method would be blocked. Outside of SOAP usage, you will not see M-POST. Like much of the XML specification, M-POST has a list of mandatory headers that must be supplied. POST and GET are not so rigid.
Both POST and GET leave their data open for view. It would be splitting hairs to say that a hacker of lesser skill could divine information from one method and not the other. But, splitting hairs, it's easier to snoop on the GET method than the POST method because all of the form data is part of the URL. Since the GET method is easy to bookmark, it will sit in the browser history. Should someone come by later, sit at the same web browser and retrace the steps of another user, it would be easy pickings if the path were laid out with URLs laced with appended query strings. If a developer puts the user name and password in a query string sent via the GET method, that is left clear as day as part of the URL. Stupid as this may sound, it is more common than you would think.
The HEAD method is rarely used. It looks identical to the GET method but, as the name implies, it works only with the head of the response: the server sends back the headers and not the message body. The HEAD method is used primarily to gain the meta-information about the entity implied by the request without transferring the entity-body itself. Search engine spiders commonly employ the HEAD method to gain document information: title, length, last-modified date, format, etc. A rough sketch of a HEAD request appears below.
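A rough sketch in PHP of issuing a HEAD request by hand over a socket, aimed at the page from the earlier examples; only the status line and headers come back, never the body:

<?php
// Issue a HEAD request and print whatever headers the server returns.
$fp = fsockopen('www.dewolfe.bc.ca', 80, $errno, $errstr, 10);
if ($fp) {
    fputs($fp, "HEAD /signup.asp HTTP/1.0\r\nHost: www.dewolfe.bc.ca\r\n\r\n");
    while (!feof($fp)) {
        print fgets($fp, 128);   // Content-Length, Last-Modified, Content-Type, ...
    }
    fclose($fp);
}
?>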
In addition, there are several other methods: PUT, DELETE, TRACE and CONNECT. PUT specifies that the data sent with the request gets stored under the provided URL. DELETE requests that the server delete the resource; when used successfully, the URL becomes invalid for use by any future methods. PUT and DELETE are each a Pandora's box of security problems and are rarely used today. The TRACE method is used to invoke a remote, application-layer loop-back of the request message. The CONNECT method is reserved for use with a proxy that can dynamically switch to an SSL tunnel. None of these are commonly used when the web client is interacting with the web server.
Environment Variables
Environment variables are those passed across the client-server relationship. These variables can be "cooked" or falsified; that isn't something a novice can do, but it's not very complex either. A wealth of information is sent as environment variables: information about the web client and the computer it runs on, the web server that the web client request is destined for, the time and date, information about the web application that is receiving the request and other sundry data. Much of this input can shape what the web application does. Does it send out an IE- or a Netscape-optimized page? Seeing that a Windows platform browser has contacted the web application, should it send back an ActiveX object as part of its response? Seeing that the client machine has an IP address that is consistent with a high-speed ISP, should it send back a more complex, bandwidth-thirsty response? Environment variables provide useful information, or input, for honing responses.
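A short sketch of reading a few of these variables in PHP, where they arrive in the $_SERVER array (older scripts reach them through getenv()):

<?php
// A handful of the environment variables that arrive with every request.
$agent   = $_SERVER['HTTP_USER_AGENT'];   // which browser made the request
$address = $_SERVER['REMOTE_ADDR'];       // the client machine's IP address
$method  = $_SERVER['REQUEST_METHOD'];    // GET, POST, HEAD, ...
print "A $method request arrived from $address using $agent.";
?>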
Making Use Of Input
Form data can be stored in its raw form and retrieved for later use. Because it is URL encoded, form data can be added to a flat file or inserted into a database without corrupting it. It's made for easy transport. This means that raw data can be trapped, stored and reviewed later to help debug applications, as in the sketch below.
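A hedged sketch of trapping that raw input (the log file name is hypothetical); because the data is still URL encoded, it drops into a flat file without any fuss:

<?php
// Append the raw, still-encoded query string to a flat-file log.
$raw = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';
$fp  = fopen('/tmp/form_input.log', 'a');   // hypothetical log location
if ($fp) {
    fwrite($fp, date('Y-m-d H:i:s') . "\t" . $raw . "\n");
    fclose($fp);
}
?>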
Combining environment variables with form data is a great way to keep the interface easy for users and the use of bandwidth to a minimum.
Some websites ask the user whether they are using Netscape or Internet Explorer. There is no need: when the user comes to a web site, the site already knows which browser they are using, because the environment variables that tag along with every request say so. On the client side, JavaScript and VBScript can also see whether the browser is Netscape, Internet Explorer or something else. It is far easier to write something to check the environment variables than to ask the user to input the right choice. If you rely on the environment variables instead, you won't have to bother your web visitors and they won't be able to lie about which browser they're using just to have some fun.
If you want to tailor your output for a select few recipients, having an application working in the background is one way to do it. For instance, you could have a Server Side Include on key pages that detects whether users came from a certain set of IP addresses. Based on the input the server received from the environment variables, it could then serve out different data to those clients. A web design company that was trying to get a lot of work from the government employed this. They sent out company profiles to every arm of government and knew which IP addresses the government had to use to surf their site. So, when a web client with a matching REMOTE_ADDR came to their site, it got a lot of hype aimed at the government that wasn't visible to people using non-government machines. While the REMOTE_ADDR can be duped, this is a way to vary content (a sketch of the check appears after this paragraph). Another, wider application is to give browser-specific code to the client based on what it can best deal with. For example, there is a piece of Dynamic HTML that allows a user to click on a link and automatically make a page their homepage. However, this doesn't work for Netscape 4.x users. So, why torture them? Applications built into the page and executed on the server can remove that code so this feature does not taunt Netscape users.
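A minimal sketch of that REMOTE_ADDR check in PHP (the address block and include files are hypothetical); since the address can be spoofed, treat it as presentation logic, not security:

<?php
// Serve different content to clients arriving from a known address block.
$address = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
if (strpos($address, '192.197.') === 0) {      // hypothetical government block
    include 'government_pitch.html';           // extra hype for those visitors
} else {
    include 'standard_page.html';              // everyone else sees the usual page
}
?>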
Dealing with Output
Output is one of the big areas where web applications gain ground on desktop and client-server applications. A simple set of HTML commands can be served as output and create a complex result. The web application can also detect what application and platform is going to receive the content and fine-tune it to best suit that web browser. The user can be on the newest Mac, a 486 running Windows 3.1, or a handheld browser and all of them can share the same potential to get useful output. Because web browsers are so ubiquitous, most of the machines that will stand as the intermediary between your web application and the user will have a browser on board. Even machines that can't run a GUI can browse the web via Lynx and deliver content from a command prompt. Were you to consider output alone, the Web would seem like the best environment to develop an application.
Developers moving to web applications from desktop applications may find the greatest advantage. Most desktop applications today use a graphical interface for output. Desktop applications allow a developer to dictate the appearance and formatting of the output at the expense of platform independence. The web almost completely leaves behind platform dependence in exchange for browser preference; in most cases, Internet Explorer versus Netscape. Application development can be severed from the user-interface development and that GUI development can be handed to anyone who is competent at HTML development.
The client-server perspective doesn't show as great an advantage. Most client-server applications tie into a user interface and need that specific interface to function, or they offer terse, bandwidth-conscious responses, like those served out from a telnet session. Responses like that may be friendly to a hardcore programmer, but try "success" as the only response sent to your Aunt Martha in Poughkeepsie and you'll discover that most users require more. With that said, web applications offer the double-edged sword of not maintaining state. Once output is served, the web server has nothing more to do with its user. The next volley of input may come a minute later or a week later, and the web server has spent the intervening time performing other tasks without a single process preoccupied with a dallying user. Compare that with the FTP and telnet processes that suck up server load and you see why developers embrace web applications even with the added headaches that come from state and security (or lack thereof).
Web applications can serve out more than just HTML. They can build JavaScript so that you can send some of the processing to the client. There are graphics libraries for most of the scripting languages in use. PHP scripts can generate PDF documents from scratch. The Swift Generator can build Flash animations on the fly. ASP and COM functions can build OLE documents such as Microsoft Access, Excel and Word documents. XML can be used to orchestrate multimedia through SMIL (Synchronized Multimedia Integration Language). A script on the server can gather and archive files into a ZIP file and then serve that file as output. One web server response can act like a conductor: it can respond to the client with output packed with calls to web applications. When the web client receives the response, it has buried in it a volley of new requests automatically initiated by the web browser.
For instance, a web user could click on a link that looks like this:
http://www.dewolfe.bc.ca/chapter1.php?go=west+young+man
The script called "chapter1.php" would chew on the form data (the form field "go" has "west young man" as its value). After processing, the PHP script could send back:
<title>Form Result</title>
Where should the man go? He should go:
west young man
But what if the user had bigger things in mind? What if that string of text was supposed to be the first step in a more complex result? The PHP could instead send this back:
<title>Form Result</title>
Where should the man go? He should go: west young man
<img src="imagebuilder.cgi?word=west">
<img src="imagebuilder.cgi?word=young">
<img src="imagebuilder.cgi?word=man">
In this example, when the PHP response is sent back to the client, it launches three requests to the script called "imagebuilder.cgi", each of which will come back with its own response. At the same time, this "imagebuilder.cgi" script can do many things. Remember all of those environment variables that tag along for the ride? This script can serve out data, but it can also retain as much data as it serves about the web client through that passive and almost unavoidable body of environmental data.
As long as the user makes repeated calls to the server, output can spark a cascade of requests and responses from the web client to the server and back again. Each of the requests can be combined into an accurate snapshot of the client machine. If the requests for images are noticeably spaced, this could indicate a low-bandwidth connection, or a client platform that is slow or overtaxed. If the user only grabs one or two page elements (e.g. the HTML and one of the 10 or 20 images), this may tip off your server. Maybe the REMOTE_ADDR is different for each image; or the USER_AGENT changes for no good reason; or the HTTP_REFERER is missing. These client requests might not be valid requests from a user's web browser, but could be the actions of a web bot cherry-picking content from your site. How you react to that is your decision; reading the environment variables from the client is your chief tool in detecting the health of the connection between client and server. A rough sketch of such checks follows.
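A hedged sketch in PHP of the kind of checks described above (the heuristics and log message are made up); missing or inconsistent headers are hints, not proof, that a client is automated:

<?php
// Flag requests that look more like a web bot than a browser.
$referer = isset($_SERVER['HTTP_REFERER'])    ? $_SERVER['HTTP_REFERER']    : '';
$agent   = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$suspicious = 0;
if ($referer === '') { $suspicious++; }                    // no referring page
if (stripos($agent, 'bot') !== false) { $suspicious++; }   // self-identified crawler
if ($suspicious > 0) {
    error_log('Possible automated client: ' . $_SERVER['REMOTE_ADDR']);
}
?>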