Technology and Society Book Reviews

Title: The Laws of the Web
Author: Bernardo Huberman
Publisher: MIT Press
Copyright: 2001
ISBN: 0-262-08303-5
Pages: 105
Price: $24.95
Rating: 89%

When someone designs a product, they need to understand how prospective customers will actually use it. They need a prototype that makes its functions as easy to grasp as possible, and they need to respond to customer feedback when they miss the mark. The same principles hold at the macro level: architects and urban planners must understand how people will use and move through an area before settling on a design.

In The Laws of the Web, Bernardo Huberman describes the techniques he and others at Xerox's Palo Alto Research Center (PARC) use to analyze the behavior of Web surfers. Rather than focus on individual cases, or even a few thousand individuals, Huberman and his colleagues look at how the mass of users traverses the Web. Using techniques from statistical mechanics and nonlinear dynamics, the researchers have found that both Web sites and Web traffic do, in the aggregate, follow predictable patterns. Their data source is the Internet Archive Project, which uses spiders to crawl the Web and copy every page that can be viewed without a password, so their results are empirical rather than theoretical. Communication infrastructure planners, Web site developers, and policy analysts can all use these findings to guide their work.
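The book itself stays away from code, but the flavor of this kind of aggregate analysis is easy to sketch. The following Python snippet fits a power-law exponent to made-up crawl tallies; both the numbers and the simple log-log least-squares fit are my illustration, not Huberman's data or method:

    import math

    # Hypothetical crawl tallies (illustration only, not Huberman's data):
    # site_size_counts[n] = number of crawled sites with exactly n pages.
    site_size_counts = {1: 100000, 2: 24800, 3: 11200, 4: 6300,
                        5: 4000, 10: 980, 20: 260, 50: 41, 100: 11}

    # A power law, count(n) = C / n^x, plots as a straight line on
    # log-log axes: log(count) = log(C) - x * log(n). An ordinary
    # least-squares fit on the log-transformed points therefore
    # recovers x as the negated slope.
    log_n = [math.log(n) for n in site_size_counts]
    log_c = [math.log(c) for c in site_size_counts.values()]
    mean_n = sum(log_n) / len(log_n)
    mean_c = sum(log_c) / len(log_c)
    slope = (sum((a - mean_n) * (b - mean_c) for a, b in zip(log_n, log_c))
             / sum((a - mean_n) ** 2 for a in log_n))
    print(f"Estimated exponent x: {-slope:.2f}")  # close to 2 for this data

A real study would use maximum-likelihood estimation and goodness-of-fit tests rather than a naive log-log regression, but the sketch shows the shape of the idea: pull counts out of a crawl, transform, and read off the exponent.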

As it turns out, the distribution of page counts across Web sites follows a power law, which says that the number of sites with n pages is proportional to

    1 / n^x

where x is some number greater than or equal to 1. If x were determined to be two, then sites with two pages would be one fourth as common as sites with a single page, sites with three pages one ninth as common, sites with four pages one sixteenth as common, and so on. This power-law distribution looks nothing like the normal distribution, where values form a bell curve around an identifiable median. Instead, as the x = 2 example shows, the number of sites with a given page count drops precipitously as the page count grows. The plot on page 26 shows that both the distribution of page counts and the distribution of "out" links per site are described by the power law, albeit with different values of x.
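To make the x = 2 arithmetic concrete, this short Python sketch (mine, not the book's) tabulates how common n-page sites are relative to one-page sites and shows the steep drop-off:

    # Frequency of n-page sites relative to one-page sites under a
    # power law, count(n) proportional to 1 / n^x, with x = 2 as in
    # the example above.
    def relative_frequency(n: int, x: float = 2.0) -> float:
        return 1.0 / n ** x

    for n in (1, 2, 3, 4, 10, 100):
        print(f"{n:>3} pages: {relative_frequency(n):.6f} of the "
              f"one-page-site count")

With x = 2 the table reproduces the one-fourth, one-ninth, one-sixteenth progression; a larger x makes the tail thinner still.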

The remainder of the book discusses other results and how they can be applied to Web commerce, bandwidth planning, traffic projections, and other phenomena. There were times when I was screaming for just a bit more mathematical detail, but Huberman judiciously followed the dictum (which I first encountered in Stephen Hawking's A Brief History of Time) that each equation in a book cuts its readership in half! I hope that doesn't happen with The Laws of the Web. It's an intriguing book that gives readers without advanced math skills a window into how researchers are helping make sense of the Web.

Curtis D. Frye (cfrye@teleport.com) is the editor and chief reviewer of Technology and Society Book Reviews. He is the author of three online courses and nine books, including Privacy-Enhanced Business from Quorum Books.