As the nineteenth and twentieth centuries saw the rise and proliferation of public libraries, in association with a general growth in education, so the twenty-first century is seeing their decline, if not demise.
Among the most prominent excuses for their cutback and closure is the assertion that information available on the internet is better and more complete, and that, with near-universal internet access, there is therefore no need for libraries. This article looks briefly at the fallacies behind that assertion.
The internet often lacks detail
Most internet information providers are driven not by philanthropy and altruism, but by popularity, which often determines advertising, which in turn generates revenue. The most prominent exception to that is of course Wikipedia, but even there popularity is a significant consideration. Few are prepared to put effort into writing web pages which hardly anyone will read.
Sites therefore tend to provide information which is most likely to be accessed. If it is fine detail, esoteric, or very specialised, then it is likely to be in short supply, or absent altogether. Academic journals and books, on the other hand, generally thrive by their specialist content, which justifies increasingly high costs. It is not hard to come up with examples, so I will just offer a couple which I have encountered recently.
Ask Google the question “when was Masaccio’s Holy Trinity moved?” and you will not of course get an answer. Instead, the first link offered is to the Wikipedia article on the painting, which gives one date only vaguely, as some time after 1860, and a second in 1952, but without citing the source of that information. Although the book containing a detailed account of the painting’s movements is listed as further reading, the specific chapter appears in neither the references nor the further reading.
Ask “when did Masaccio visit Rome?” and Wikipedia, again the top hit, gives only the year 1423. Although it references a specialist book which does contain much more information on this, it does not mention the passage in that book which makes the case that Masaccio visited Rome again in 1427.
Internet coverage is often patchy and biased
Print publishers usually identify significant areas and look to commission authors to address those needs. They also publish encyclopaedic works which cover larger fields. As no one co-ordinates internet content, its coverage and depth are much more variable. In some fields, such as practical aspects of computing and technology, it is far superior to anything available in print. But in many, particularly more traditional, areas it is weak at best.
Many of the loudest and most prolific publishers on the internet are also those with other purposes: promoting products, political agendas, religious beliefs, or plain prejudice. These greatly influence both the quantity and quality of information that is available.
Ask Google for links on “continuous narrative in art”, and you will be taken to another Wikipedia article, this time on Narrative art, which is written almost entirely from a single paper published in 1990 on early Buddhist art, and completely omits large tranches of important information about European narrative painting.
Inevitably the more prone a topic is to bias and prejudice, the greater the chances of information being distorted or plain wrong.
Information from the internet is almost invariably unedited and unchecked, and often erroneous or misleading
There are so many examples of errors which have been introduced into Wikipedia articles, accidentally or deliberately, that I will not attempt to cover even the most celebrated. It is worth stressing, though, that Wikipedia is usually among the most reliable sources, because it does have editorial policies and active systems to try to maintain the quality of its content. Other sources of information are almost invariably far worse, as they have little or no editorial system at all.
To illustrate the sort of conflicts that arise in non-contentious areas, I looked for two population estimates, the first for Florence in 1400, the second for London at the time of the Great Fire of 1666.
Ask Google for “how many people lived in Florence in 1400” and most of the hits returned contained no information at all that related to the question. The few which did provide an answer did not do so for the date requested: estimates ranged from 60,000 for 1425 to 120,000 for 1300.
Figures for London in 1666 were generally easier to obtain, but ranged from 350,000 to 600,000.
In many fields, the internet lags traditional publishing
In more traditional fields where there is still extensive academic publishing, information accessible on the internet lags that published in the leading reference books. Many serious academic studies in book form are developed from doctoral theses; the internet now often provides access to the original thesis, but not to the more developed, and thoroughly edited, book.
For example, the Wikipedia page on Georges Seurat, accessed at 1220 on 28 April 2016, made no reference to Michelle Foa’s monograph on the artist, published in the summer of 2015, almost a year earlier. Most of its referenced sources were from a book published in 1991, and the most recent work listed for further reading was dated 1998. This is in spite of the page having last been modified on 16 April 2016.
Even if good information is available on the internet, it is often hard to find
It is a common experience that search results from each of the more popular search engines return a large number of hits which are commercial websites, or irrelevant. Search for almost any painting or painter, and many of the top hits are promoting the sale of prints or painted reproductions of the work.
This can be worked around by tailoring an advanced search, but even university students appear surprisingly bad at using internet search engines effectively and efficiently. Good librarians are trained to undertake effective searches, and are invariably delighted to help readers get the best out of print and online publications.
The best internet resources are not freely available
Although the open-access movement in academic publishing continues to grow, the vast majority of papers, past and present, have been published in commercially run journals whose contents are not freely available. The only practical way of obtaining affordable access to that literature is through JSTOR and its relatives. This is now open to many university alumni, but non-graduates find it very hard to get accounts.
In the absence of an account, when you have finally discovered a paper worth reading, you will have to pay reprint charges ranging from $2 to $40 per article. This makes anything other than very occasional access prohibitively expensive.
Few published books are available freely on the internet
The great majority of papers and books published in print come within the scope of copyright, and their copyright holders do not make their content freely available on the internet. Google’s campaign to build a massive online library does include books which are still in copyright, but to satisfy ‘fair use’ requirements it includes only very small portions of each such book.
There are, of course, many vendors operating at the edges of the internet who offer copies of works which are still in copyright; doing so is a breach of copyright, and illegal.
The only people who are likely to believe that ‘everything is now available on the internet’ are those who seldom use libraries or the internet, but leave it to paid researchers or (often unpaid) interns to do the work for them.
In truth, online and print publications and other sources of information are usually complementary: the internet enriches; it does not replace. Those who seek to save public expenditure on vital resources such as public libraries should be honest with the public, and tell them that they are reversing two centuries of improvements in education, knowledge, and skills. Because that is exactly what is going to happen.