Types of Software Solutions Today

To understand what .NET is and what it offers, it makes sense to examine the landscape today. In this sample chapter, Bart DePetrillo looks at the more popular software design solutions currently in use, followed by the challenges that these solutions present to software developers and users. The chapter also explores how .NET addresses these issues and how it will alter the landscape for the better.

Like some science fiction character, today’s software solutions assume different forms depending on the market they target. When aiming at the consumer and small business marketplace, the preferred shape is the familiar and somewhat innocuous desktop application. Shrink-wrapped desktop applications require onsite installation, meaning they are installed directly on the user’s PC. Therefore, desired features must run directly from the user’s machine.

In the larger business environment, the solutions often take on the slightly different form of client/server software. This requires installation on both the server and, in most cases, the user machines. Lastly are the Internet solutions, which generally take the form of portal sites on the Web.

No matter which form the software application assumes, it is still at its heart just a service designed to solve a problem, whether it enables you to work more efficiently or empowers you to do in minutes some task that, prior to the advent of the computer, would have taken hours or days.

Unfortunately, it often seems that computers and software are, in astrological terms, born under the sign of Gemini—their behavior and benefit lean towards the erratic and multifaceted! On the one hand, the personal computer–software-technology team has increased productivity many times over (in ways perhaps imagined only by George Orwell or Aldous Huxley); on the other, it has failed us miserably. Although it is a challenge to develop, deploy, and use software today, it also seems impossible to live without it!

Desktop Solutions

Desktop software is the most commonly deployed “service solution” in existence. It is a chameleon that takes many shapes, colors, and sizes depending on the service it is meant to deliver. However, whatever form the desktop application assumes, a few of its characteristics remain constant, as described in the following sections.

Leveraging the Features of the OS

Desktop applications are designed specifically for the computer operating system (OS) on which they are meant to run. This empowers them to rely on the services or features of the OS, such as file management, Internet support, security, and the basic architectural design and philosophy of the OS.

An example of leveraging the operating system architecture as a service is how developers can design and implement a Windows application using the Component Object Model (COM/COM+). COM is a Windows-specific development model and technology that eases the development of reusable software components. By using COM components, developers can snap together functionality that they previously developed or that was created by a third party. The COM technology also makes it possible to create software that can embed in or link to other software. For example, you can insert an Excel spreadsheet containing your financial projections for the month into a report you are writing using Word.
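To make the idea of snapping components together a bit more concrete, here is a short sketch that drives Excel through its COM automation interface from a script. It is only an illustration, not anything the chapter requires: it assumes a Windows machine with Excel installed and the third-party pywin32 package, and it uses Python purely to keep the example brief; the same COM object could just as well be created from C++ or Visual Basic.

    # A minimal sketch of using a COM component (assumes Windows, Excel,
    # and the pywin32 package; "Excel.Application" is the ProgID under
    # which Excel registers its automation object).
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")  # create/attach to the COM server
    excel.Visible = True                                    # show the Excel window

    workbook = excel.Workbooks.Add()                        # new workbook via the COM interface
    sheet = workbook.Worksheets(1)
    sheet.Cells(1, 1).Value = "Projected revenue"
    sheet.Cells(1, 2).Value = 125000

    workbook.Close(SaveChanges=False)                       # discard the scratch workbook
    excel.Quit()                                            # shut down the automation server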

NOTE: In reality, the features and functionality that COM offers do not come free of work; COM is by no means easy to learn, and becoming proficient with it involves a significant learning curve.

Unfortunately, every new generation of a given software solution seems to require a faster computer to get the same performance. The truth of the matter, however, is that performance stagnation is the result of packing the solution with more powerful, processor-intensive features that were not possible on last year’s PC. For example, in 1995, the hottest new Wintel (Windows-Intel–based) PCs were using Intel’s cutting-edge Pentium 133MHz CPU.

Just imagine trying to run some of today’s software, with all the dynamic and processor-intensive features, on that “speed demon” of yesterday—I bet a snail comes to mind! Having developed software for many years, I realize that it’s not a matter of no one having thought of these new features in 1995. It is more probable that it was simply not feasible to add such features back then because of the performance drain; this is how new features become reality—with time and advancements. With all this in mind, desktop applications are generally faster than other types of architectures for general-use applications, such as productivity suites.

Tied and Bound to the OS

The flip side of designing a solution specifically for an OS is that the application is then tied and bound to that particular OS. If the solution is designed for Windows, it will not run on Linux, OS/2, MacOS, or some other system. For the average consumer, this is not an issue. Depending on which trade publication or newspaper article you read, it is estimated that between 80 and 90% of home users use a Microsoft Windows–flavored OS (such as Windows 95, Windows 98, Windows Me, or Windows 2000). The corporate user, in contrast, uses a variety of OSes, ranging from Microsoft Windows to IBM AIX and everything in between, including the trendy open-source OS: Linux. In such a heterogeneous environment, deploying a solution that is bound to a specific OS can be restrictive or even unacceptable.

You’ll learn more about today’s challenges later; for now, take a look at some of the existing desktop solutions, beginning with probably the most widely known productivity suite: Microsoft Office.

Microsoft Office

As you well know, Microsoft Office contains myriad different applications aimed at enhancing your productivity. These productivity services enable you, as the user, to perform word-processing tasks, send and receive e-mail, create to-do lists, manage your calendar, create presentations, create spreadsheets, and perform a number of other tasks.

Microsoft Word, for instance, enables you to write simple letters or reports with what-you-see-is-what-you-get (WYSIWYG) text formatting. It also includes automated features, such as generating a table of contents for your book or document, and it provides the means to perform mail merges using a document boilerplate and a list of contacts. Furthermore, it contains built-in tools, such as a spell checker and grammar checker, that you can invoke on demand or, in later versions, provide real-time checking and feedback as you type.

Although all these bundled features are empowering, most people do not use all of them, and certainly not all at once. The requirements of the casual home user are different from those of the business or power user. As a home user, you might create a spreadsheet to track your home inventory or basic expenses. Perhaps you might even use some of the reporting and graphing features in Microsoft Excel to get a better handle on where and how you are spending your money so you can better project future expenses. This is, however, a far cry from how you or others might use Excel at work. If you create spreadsheets for work, it is possible that you create complicated formulas, such as for depreciating the value of inventory or for cost analysis.

The difference in uses for the same Excel application also applies to Word, Outlook, and the other applications in Microsoft Office. It is because Microsoft Office and many other desktop applications target a wide audience that they come packed with countless features; one solution must satisfy the demands of every user. (Granted, not all applications are designed for “general” use, but a good many are.)

Some Negative Aspects of Desktop Applications

Although each successive version of any given desktop application includes additional features, many of them are not aimed at the common user but are niche features that address the needs of a segment of the overall market. Although the users who utilize these features praise their introduction, everyone else scratches their head in wonderment and sometimes in frustration, usually because these new features tend to make the application less intuitive and more unwieldy. To drive the point home, if you use Windows 2000 or Microsoft Office 2000, you will notice that the menus no longer show every menu item; instead, they include an arrow as the last item, and clicking on this arrow expands the menu to display all available options, not just the most recently used. This feature is actually a user interface (UI) enhancement to Windows to address the fact that showing all available options simply overwhelms most users.

The explosion of readily available features has introduced what is known in the computer field as bloatware. Installation sizes range from large to huge, both in the amount of hard drive space required to house the solutions and in the amount of computer memory (RAM) required to run them. To drive this point home, I have switched to my task manager and checked the amount of memory used by Word 2000 at this time—a whopping 22,164KB! That is over half the RAM that computer systems came with only a few short years ago! In reality, so much hard drive space and RAM is consumed because of the development environment of today, which .NET addresses with such things as Web services (more on that later).

Although resource consumption grows with every successive version of an application, performance is the primary victim. Those of us who border on paranoia often wonder if bloatware is really just a conspiracy to drive folks to upgrade their PCs. In reality, given today’s development models, the burgeoning size of desktop applications is an unwanted and unintended side effect.

To their credit, some of the more recent desktop applications, such as Microsoft Office 2000, permit more fine-grained feature selection during installation, saving time and hard disk space. However, if you choose not to install a set of features until you need them, the application will then whine until you pop in your installation CD and install whatever you selected. That might be okay if you are a home user and the CD is handy, but if this happens at work, more likely than not it means that you have to call your IT support to come to your rescue—which takes precious time from both of your schedules.

“Does it have to be this way?” is not that simple a question to address.

No, it no longer has to be the way it is today, and you will learn how .NET offers to change the situation later. First, however, take a look at another popular architecture—client/server—that’s used primarily in the business environment. It addresses this and a few other issues that desktop applications do not.

Client/Server Solutions

The easiest way to think of the client/server software architecture is to imagine a desktop application broken into logical pieces and distributed throughout a network of computers. The rationale behind such a design is not important at the moment; trust that software is built this way for good reason and that the design has its particular benefits, some of which you’ll learn about in the coming section.

The client/server model was born from two converging demands. First, as the personal computer became more powerful in the late 1980s and early 1990s, corporations began adopting it as a lower-cost solution to low-end business processing. Essentially, the PC took on the same displacing role that minicomputers had taken against their larger, much more expensive brethren, the mainframes. Companies viewed the PC as a means to make their employees more efficient and flexible than was economically viable with minis or mainframes.

In addition to running the shrink-wrapped desktop productivity applications, corporate information technology (IT) departments, as well as software-consulting companies, began creating desktop applications specifically geared to solving business processes utilizing these relatively cheap PC platforms.

As the PC evolved and inundated the market, IT departments and hardware companies came to realize that while the personal computer empowered each person to do more than was previously possible with the hosted dumb terminals, the need for centralized processing of data (using terms loosely here) would not vanish. However, technology managers and manufacturers both realized that the Intel computer chips, which were driving the corporate PC revolution and the surrounding hardware, had sufficient performance to make it possible for the likes of Compaq to forge a new category of computer: the PC server.

NOTE: The “PC” in PC server is used only to differentiate these Intel-based computer servers from the pre–Intel-based servers. “PC servers” are essentially just souped-up PCs. Granted, PC manufacturers in this market have always added hardware optimized to handle the task at hand, but the basic design and certainly the roots of the server lie with the vanilla desktop personal computers.

Designed not for an individual employee but as a shared resource accessible by multiple employees, the PC servers sat in the back rooms of IT departments. Initially, these machines were used for simple centralized tasks such as storing and accessing company files and data (what became known as file servers), acting as print servers, authenticating users on the corporate network, and, in time, hosting a few small commonly accessible applications. Somewhere along the evolutionary path, software developers (including the corporate IT staffs) came up with the idea of taking the host-terminal model of the previous computing era and evolving it.

The idea was simple: alter the hosting model by replacing the dumb terminals with the already deployed “really smart terminals” (compared to dumb terminals, personal computers even in the late 1980s were Einsteins) and thereby leverage the processing power that the client side of the host model now possessed.

Using PCs and servers had a cost advantage over the mainframes and minicomputers. Also, by utilizing the processing power of both the server and desktop client PCs, developers could create more robust, user-friendly, and efficient solutions than previously possible. Client/server computing was born.

Benefits of Client/Server Computing

The following list outlines some of the benefits of client/server solutions.

  • More for less—Many benefits to client/server (C/S) computing exist over the traditional hosted or standalone desktop application models. As mentioned, companies can utilize lower-cost computers to achieve the same tasks. Many companies were introducing PCs because their processing power and available software (cheap relative to custom mainframe solutions) provided more bang for the buck. This added employee-side (read: client-side) processing power is what developers use to create a new breed of solutions not previously possible at the same price point.
  • Breaking it all down—Furthermore, application developers can divide solutions into more manageable parts. As with the dumb terminal to mainframe design, the client machine provides a user interface to the solution; however, unlike dumb terminals, the PC-based clients have much more processing power. Therefore, the PC-based terminal offers a much richer user interface and, unlike dumb terminals, can perform business processing.
  • Centralized information storage—While processing in the client/server model is distributed, information storage is centralized. The server stores the data and acts as a coordinator for accessing and modifying information. This minimizes information redundancy and aids in keeping data consistent, even when multiple users/clients are working with it. You might wonder why you need a server at all. Think of client/server computing in terms of a manager-employee relationship, with the (sometimes incorrect) assumption that managers have more knowledge and experience in the particular field. Managers (servers) have more information about the company and day-to-day operations. They also tend to have a deeper understanding of the business processes. Lastly, managers know their department’s priorities, strategy, goals, and outstanding tasks. They then disseminate information as needed and delegate work to their employees.

The employees (clients), on the other hand, might not have as much knowledge and experience as their managers, but they have a more focused job and have access only to the information that their manager provides or that they can infer.

The significant aspect of the delegation process is that employees manage the details of the task they are assigned and execute it based on their own conclusions. Once finished with their work, they report the results back to their manager for further processing (unless, of course, you are a Dilbert fan). Essentially, this is how client/server computing works. It departs from the host-centric model in which only the server has a processor capable of doing anything and the clients (terminals) simply feed information into the server like drones.

Thus, the general design behind client/server software is that the common, processor-intensive services that can logically be centralized are hosted on the PC server, and those less intensive, uncommon, or user-specific features find their way to the desktop PC. This enables people to produce more robust, manageable, and efficient solutions that gain in performance through a divide-and-conquer architecture.

From development and maintenance standpoints, the client/server architecture makes things arguably easier. Generally speaking, the client is easy to implement; its tasks are broken into smaller, simpler tasks that, although imbued with logic, are more mechanical in nature. It is for these reasons that the client side is often implemented using rapid application development (RAD) tools, such as Microsoft Visual Basic, Borland Delphi, and Borland C++ Builder.

The server side, in contrast, is responsible for coordinating all the information that its clients request or with which they respond. Furthermore, it must process this information in additional, often more processor-intensive ways to achieve the desired results. The server components are therefore the most difficult and costly to implement. This is one reason why separation of business logic from the user interface makes the solution easier to develop, deploy, manage, and update. For instance, if your business process changes, you might need to change how you calculate your figures even though the presentation of the results remains unchanged. So, you can leave your UI code base in place and modify your server-side code only.

The opposite is also true; if customer feedback necessitates a more intuitive interface, you can update the UI of the software without touching the business logic. This division limits collateral damage—inadvertent introduction of bugs in either the client or server side when working on the other.
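As a small illustration of that separation, consider the following sketch. The function names and the straight-line depreciation rule are hypothetical, chosen only to show the boundary (and written in Python merely to keep the sketch short): the business rule lives in one place and the presentation in another, so either can change without disturbing the other.

    # A minimal sketch of keeping business logic separate from presentation.
    # (Hypothetical names; a real client/server system would put these pieces
    # on different machines and have them talk over the network.)

    def depreciated_value(cost: float, age_years: int, lifespan_years: int = 5) -> float:
        """Server-side business rule: straight-line depreciation.
        Switching to a different depreciation method changes only this function."""
        remaining_years = max(lifespan_years - age_years, 0)
        return cost * remaining_years / lifespan_years

    def render_inventory_row(item: str, cost: float, age_years: int) -> str:
        """Client-side presentation: formats whatever the business layer returns.
        A UI redesign touches only code like this."""
        value = depreciated_value(cost, age_years)
        return f"{item:<20} current value: ${value:,.2f}"

    if __name__ == "__main__":
        print(render_inventory_row("Forklift", 24_000.00, 2))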

Another, perhaps less obvious, benefit of C/S computing is that the server is often securely locked away somewhere, which prevents intentional tampering, unintentional accidents, unauthorized access, and surprise interruptions, such as when the server is inadvertently turned off by the cleaning staff.

Drawbacks to Client/Server Computing

Although client/server computing has many benefits, it does have its disadvantages.

  • Complicated to implement—Software development is about breaking a problem into pieces, making it easier to solve. To leverage the benefits of distributed processing, however, the design of client/server solutions often becomes complicated. This might seem to contradict the earlier statement that such solutions are easier to implement, but recall that there is a client side and a server side to this equation. Numerous issues, including processing and data synchronization between clients and servers, must be addressed, depending on the solution architecture.
  • Costly—Distributed computing is inherently more complicated, and therefore requires more highly trained/experienced developers and architects. Obviously this raises production costs.
  • Longer production cycles—The increased complexity again rears its head because the more complicated a solution is, the more time it takes to realize. This also increases the cost of the project.

The Internet Solution

Some view life as a circle. If you examine the software evolution from hosted application to desktop application to client/server application and now to the Internet application, you might think software mimics life! The Internet is, after all, the grandest, most host-centric system ever conceived. The Web browser is a marginally smarter client than the dumb terminal: it renders graphics and user interfaces, but, unlike the clients of client/server and desktop applications, it performs little to no business processing. Browsers instead rely almost completely on the Internet servers to which they connect. Okay, to be fair, Web-based applications can and often do perform some client-side processing using JScript and Dynamic HTML (DHTML). However, except for filling out an information request form that validates some of the data (such as verifying that the customer’s name and address were entered or that the entered date is valid), most processing is performed on the server side.

NOTE: JScript, JavaScript, and ECMAScript are more or less the same language sharing a common heritage. ECMAScript is the standardization of Netscape’s JavaScript and Microsoft’s JScript. If you are interested in learning more about ECMAScript, visit http://www.ecma.org. For more information on DHTML, check out http://www.w3c.org.

These scripting languages are lightweight programming languages used by Web developers to perform processing on a Web page from the browser (the client side). However, more often than not, business logic isn’t executed with JScript and DHTML. Instead, they are used for user interface–related tasks, such as creating dynamic navigation trees or pop-up context-sensitive windows to aid the surfer.
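To give a feel for what such client-side checks amount to, here is a small sketch of form validation. In a browser this logic would be written in JScript/ECMAScript and wired to the page with DHTML; it is shown here in Python, with made-up field names, purely to illustrate the kind of lightweight checking that runs before a form is ever sent to the server.

    # A sketch of the lightweight validation an order form might perform before
    # submission (hypothetical field names; in practice this runs in the browser
    # as page script, not as Python).
    from datetime import datetime

    def validate_order_form(fields: dict) -> list:
        """Return a list of problems; an empty list means the form may be submitted."""
        errors = []
        if not fields.get("name", "").strip():
            errors.append("Please enter your name.")
        if not fields.get("address", "").strip():
            errors.append("Please enter your address.")
        try:
            datetime.strptime(fields.get("delivery_date", ""), "%m/%d/%Y")
        except ValueError:
            errors.append("Delivery date must be a valid date in MM/DD/YYYY form.")
        return errors

    print(validate_order_form({"name": "Pat", "address": "", "delivery_date": "02/30/2001"}))
    # ['Please enter your address.', 'Delivery date must be a valid date in MM/DD/YYYY form.']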

Thinking of the Web in universal terms, the “big bang” happened around 1993 and spread outward at near–light speed. In the beginning, the Web was a world of information that was magnificently useful but static. Web pages contained few graphics, and the idea of dynamic pages with animations and sound was nothing more than a dream. As the Web became more popular and developed into a common medium, corporations took note and, at an increasingly rapid pace, began establishing their presence starting around 1995. It was during the corporate push that the Web started to become more interactive. The corporate invasion (as many in academia view this period of transition from the Web’s birth to commercial use) created a demand for dynamic, user-friendly, and engaging pages. The result was that browser manufacturers, such as Microsoft and Netscape, began competing to supply the corporate demands, introducing features into their browsers at a breakneck upgrade pace that led to the sophisticated browsers of today.

As you know from the recently belabored dot.com revolution, during this time many companies were born whose existence orbited solely around the Internet. This was the beginning of what became “the portal wars” between Yahoo!, Excite, Lycos, and AltaVista in the years to follow.

Taking a step back from the front lines for a moment, the idea of a portal really came into existence as a result of browser designers creating the default Web page that’s called up when users start their browsers. Yahoo! and most of the other contenders were originally conceived solely as simple search engines or directories to help Web surfers find Web sites of interest. The need for help finding sites in 1994 and 1995 was, in my opinion, even more of a necessity than today because, in its infancy, the Web was dominated by educators and personal sites. Today you can enter just about any word of interest surrounded by “http://www.” and “.com” and find a site of interest. For this reason, it made practical sense to make a search page the default page of many users.

Returning to the battle once again, when these search sites became companies during the initial public offering (IPO) mania of the late 1990s, they needed to produce revenues, primarily through ad dollars. For this reason, becoming a Web surfer’s default page became the prime objective of these search companies.

It did not take long for the competitors to realize that the greatest power of the Internet—the capability to jump from one source of information to another at a click—was also their biggest threat. The harsh reality: Web customers were fiercely loyal, but only for about an Internet minute. Logically, the second battle was fought over finding ways to make surfers not only continue to use their sites as the default page, but also stay at the site as long as possible. It is from this idea of holding surfers’ attention that the term “stickiness” was coined.

The main strategy employed by these warring Web sites to create stickiness was to offer various services to surfers. The portal wars present an interesting business case study but aren’t within the scope of this book.

Birth of the Internet Solution

As the portal wars heated up, sites began to offer additional services, including news, horoscopes, and stock market updates. The idea was simple: entice surfers to stick around a little longer by catching up on world or local events rather than clicking on some other Web site. However, the nature of the Web beast encourages movement and change. Back to square one: offer new services to once again differentiate.

Internet Service Solutions: Yahoo!, Amazon, and the Like

One of the first portal sites to offer extensive personalization was Yahoo! when it introduced its My Yahoo! Web service. Once you sign up for My Yahoo!, you are free to select the types of news and information you want, including the standard fare of sports updates, entertainment news, local and world news, local weather, and business and investment news. (See Figure 1.)

Figure 1 My Yahoo! provides the user with a personalized portal with which to view Web information.

Yahoo! and other sites that now offer similar services can provide such personalized information in part because, when you sign up, you agree to let them use your personal information, such as your address. The benefit to you is that you can have the local news delivered to you, as well as view the local TV schedule and movie listings.

As competitors have come, challenged, and died, Yahoo! has managed to continue to add services to further differentiate itself from the pack, including the ability to customize the look and feel of its My Yahoo! pages. In a “just because we can and it’s cool” move, Yahoo! even added the capability to select color schemes for your personalized page (see Figure 2) and made it possible to add pages, each with different information or tools (which are Web-based applications!).

Figure 2 My Yahoo! provides the user with the ability to customize colors or apply themes.

Yahoo! took the step from mere information Web service to an Internet-based software solution provider when it began offering a Personal Information Manager (PIM), e-mail, and an Internet-based “hard drive.” The latter makes it possible for users to store and retrieve documents from anywhere in the world by simply connecting to the Internet.

The PIM solution is comprehensive and includes an address book, a to-do list, and a calendar. Yahoo! added further value to these “online” solutions by partnering with Starfish Software to make its TrueSync software available. TrueSync enables you to synchronize contacts, calendar, and mail between Yahoo!’s Internet solutions and your offline, desktop applications, such as Outlook, Outlook Express, or Eudora. Additionally, the company introduced Yahoo! Messenger, which is a tool you download and run—sort of like a desktop-based Web application. Do not let the name fool you—this tool is not only for sending and receiving instant messages (like AOL’s AIM and Microsoft’s MSN Messenger), but it is an extension of the online My Yahoo! service. This companion application logs into your account either manually or automatically in order to keep you updated on your stocks, provide news flashes of interest to you (see Figure 3), remind you of scheduled appointments, provide access to your contacts, and more. It is this piece of software that keeps users returning to Yahoo!—even when their browsers are pointed at some other Web site.

Figure 3 Yahoo! Messenger keeps you informed even when your browser is closed.

Yahoo! might have started as a mere directory and search service that saw the potential of becoming a personalized Internet newspaper, but it has matured vastly and become a personalized information center.

As you can see, the services that Yahoo! offers are the first incarnations of Web services. Later you will learn how .NET not only takes this idea further, but also how it facilitates its realization. But first, take a look at another portal site offering first-generation Web services.

Today’s Web Services: Passport

Microsoft provides its own preview of .NET services with its creation of Microsoft Passport. Understanding the where, what, and why behind Passport requires revisiting Yahoo! and the other Web portal services.

Almost every Web site today offers some kind of personalization. This ranges from content and layout customization, such as in My Yahoo! and The Wall Street Journal Interactive Edition, to managing online payment preferences. The one aspect common to all personalization efforts is that you, the Web surfer, need to identify yourself so that your preferences can be saved and restored as you come and go.

Technically, enabling a Web site to store user identities and preferences is rather straightforward. The development team creates a registration page that enables the users to designate a username and password, which the site uses to uniquely identify and authorize each user. A username is analogous to a customer number, serving as an identifier. More often than not, the registration page also requests varying degrees of personal information, such as mailing address, e-mail address, and phone number, some of which are later used to personalize content and advertisements.
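The mechanics are simple enough to sketch. The snippet below (hypothetical names throughout; a real site would keep this in a database and handle passwords far more carefully) shows the essence: the username is the unique key under which the site files a password check and whatever personal details the registration form collected.

    # A minimal sketch of the registration bookkeeping described above
    # (hypothetical names; real sites persist this in a database).
    import hashlib

    users = {}  # username -> account record

    def register(username, password, email, zip_code):
        if username in users:              # first come, first served
            return False
        users[username] = {
            "password_hash": hashlib.sha256(password.encode()).hexdigest(),
            "email": email,
            "zip_code": zip_code,          # later used to localize news, weather, and ads
            "preferences": {},             # filled in as the user personalizes the site
        }
        return True

    def authenticate(username, password):
        record = users.get(username)
        return (record is not None and
                record["password_hash"] == hashlib.sha256(password.encode()).hexdigest())

    register("websurfer", "s3cret!", "surfer@example.com", "98052")
    print(authenticate("websurfer", "s3cret!"))                             # True
    print(register("websurfer", "other", "someone@example.com", "10001"))   # False: name taken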

From the user standpoint, setting up personalization can prove rather time consuming. Most sites require you to register and create a username and password that becomes your identity. Because this identity process is first come, first served, often your choice of username is taken and you must choose another. One challenge is remembering which name/password combination you used on each site.

One of the key concepts behind the Passport service is that it manages information that is uniquely yours. Conceptually, it’s like the wallet in your back pocket or purse: It stores personal information such as your mailing address, phone number, and birth date. It can securely store your credit card and other sensitive information (see Figure 4), and it can manage the various username/password combinations you have with different Web sites.

Figure 4 Microsoft Passport wallet service page stores your credit card information securely.

Essentially, it aims to make your Web surfing experience as effortless as possible by alleviating the need to re-enter or remember information that is unique to you. For instance, if you purchase an item from a site that utilizes Passport, you do not have to retype your name, credit card number, expiration date, and billing address every time you shop at another site. Passport conveys this information to the site without requiring you to get actively involved. This saves you time and aggravation; however, Passport releases only the information that you authorize. Passport is an example of a non–.NET Web service.

NOTE: If you are interested in seeing a complete list of e-tailers that use Passport, point your browser to http://www.passport.com/Directory.

Passport, however useful now, is mostly built on yesterday’s Web or proprietary technologies. It does not leverage the open technology solutions that underlie .NET, including XML, SOAP, and UDDI. These technologies enable Web sites to integrate more easily; this, in turn, is what will make Web services ubiquitous.

