BESSER concluded his talk with several comments on the business arrangement between the Smithsonian and CompuServe. He contended that not enough is known concerning the value of images.
******
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DISCUSSION  Creating digitized photographic collections nearly impossible except with large organizations like museums * Need for study to determine quality of images users will tolerate *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
During the brief exchange between LESK and BESSER that followed, several clarifications emerged.
LESK argued that the photographers were far ahead of BESSER: It is almost impossible to create such digitized photographic collections except with large organizations like museums, because all the photographic agencies have been going crazy about this and will not sign licensing agreements on any sort of reasonable terms. LESK had heard that National Geographic, for example, had tried to buy the right to use some image in some kind of educational production for $100 per image, but the photographers will not touch it. They want accounting and payment for each use, which cannot be accomplished within the system. BESSER responded that a consortium of photographers, headed by a former National Geographic photographer, had started assembling its own collection of electronic reproductions of images, with the money going back to the cooperative.
LESK contended that BESSER was unnecessarily pessimistic about multimedia images, because people are accustomed to low-quality images, particularly from video. BESSER urged the launching of a study to determine what users would tolerate, what they would feel comfortable with, and what absolutely is the highest quality they would ever need. Conceding that he had adopted a dire tone in order to arouse people about the issue, BESSER closed on a sanguine note by saying that he would not be in this business if he did not think that things could be accomplished.
******
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LARSEN  Issues of scalability and modularity * Geometric growth of the Internet and the role played by layering * Basic functions sustaining this growth * A library's roles and functions in a network environment * Effects of implementation of the Z39.50 protocol for information retrieval on the library system * The trade-off between volumes of data and its potential usage * A snapshot of current trends *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ronald LARSEN, associate director for information technology, University of Maryland at College Park, first addressed the issues of scalability and modularity. He noted the difficulty of anticipating the effects of orders-of-magnitude growth, reflecting on the twenty years of experience with the Arpanet and Internet. Recalling the day's demonstrations of CD-ROM and optical disk material, he went on to ask if the field has yet learned how to scale new systems to enable delivery and dissemination across large-scale networks.
LARSEN focused on the geometric growth of the Internet from its inception circa 1969 to the present, and the adjustments required to respond to that rapid growth. To illustrate the issue of scalability, LARSEN considered computer networks as including three generic components: computers, network communication nodes, and communication media. Each component scales (e.g., computers range from PCs to supercomputers; network nodes scale from interface cards in a PC through sophisticated routers and gateways; and communication media range from 2,400-baud dial-up facilities through 4.5-Mbps backbone links, and eventually to multigigabit-per-second communication lines), and architecturally, the components are organized to scale hierarchically from local area networks to international-scale networks. Such growth is made possible by building layers of communication protocols, as BESSER pointed out. By layering both physically and logically, a sense of scalability is maintained from local area networks in offices, across campuses, through bridges, routers, campus backbones, fiber-optic links, etc., up into regional networks and ultimately into national and international networks.
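The layering LARSEN describes can be pictured as successive encapsulation: each layer wraps the data handed down from the layer above, and the peer layer on the receiving side unwraps it, which is what lets office networks, campus backbones, and wide-area links grow independently. The following sketch is purely illustrative; the layer names, delimiter, and message are assumptions, not anything presented at the Workshop.

    # Illustrative sketch of protocol layering: each layer encapsulates the
    # payload from the layer above, and the receiver peels the layers off in
    # reverse order. Layer names and header formats are invented for illustration.

    def encapsulate(payload: bytes) -> bytes:
        transport = b"TRANSPORT|" + payload      # adds end-to-end bookkeeping
        network = b"NETWORK|" + transport        # adds source/destination addressing
        link = b"LINK|" + network                # frames the packet for one hop
        return link

    def decapsulate(frame: bytes) -> bytes:
        _link_hdr, network = frame.split(b"|", 1)       # link layer strips its framing
        _net_hdr, transport = network.split(b"|", 1)    # network layer strips addressing
        _trans_hdr, payload = transport.split(b"|", 1)  # transport layer hands up the payload
        return payload

    message = b"search request from a library patron"
    assert decapsulate(encapsulate(message)) == message

Because each layer only relies on the service of the layer beneath it, the same payload can cross a local area network, a campus backbone, and a regional or national network without any application-level change, which is the sense of scalability LARSEN points to.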
LARSEN then illustrated the geometric growth, over a two-year period through September 1991, of the number of networks that comprise the Internet. This growth has been sustained largely by the availability of three basic functions: electronic mail, file transfer (ftp), and remote log-on (telnet). LARSEN also reviewed the growth in the kind of traffic that occurs on the network. Network traffic reflects the joint contributions of a larger population of users and increasing use per user. Today one sees serious applications involving moving images across the network, a rarity ten years ago. LARSEN recalled and concurred with BESSER's main point that the interesting problems occur at the application level.
LARSEN then illustrated a model of a library's roles and functions in a network environment. He noted, in particular, the placement of on-line catalogues onto the network and patrons obtaining access to the library increasingly through local networks, campus networks, and the Internet. LARSEN supported LYNCH's earlier suggestion that we need to address fundamental questions of networked information in order to build environments that scale in the information sense as well as in the physical sense.
LARSEN supported the role of the library system as the access point into the nation's electronic collections. Implementation of the Z39.50 protocol for information retrieval would make such access practical and feasible. For example, this would enable patrons in Maryland to search California libraries, or other libraries around the world that are conformant with Z39.50, in a manner that is familiar to University of Maryland patrons. This client-server model also supports moving beyond secondary content into primary content. (The notion of how one links from secondary content to primary content, LARSEN said, represents a fundamental problem that requires rigorous thought.) After noting numerous network experiments in accessing full-text materials, including projects supporting the ordering of materials across the network, LARSEN revisited the issue of transmitting high-density, high-resolution color images across the network and the large amounts of bandwidth they require. He went on to address the bandwidth and synchronization problems inherent in sending full-motion video across the network.
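The client-server model LARSEN invokes can be suggested with a small sketch: one familiar client interface, many conformant servers. The class, hosts, and query syntax below are hypothetical; a real client would exchange the actual Z39.50 init, search, and present messages rather than return placeholder records.

    # Hypothetical sketch of the Z39.50 client-server search model: the same
    # patron-facing client searches any conformant catalogue, local or remote.
    # Z3950Client and the example hosts are invented for illustration only.

    class Z3950Client:
        def __init__(self, host, port, database):
            self.host, self.port, self.database = host, port, database

        def search(self, query):
            # A real client would send a searchRequest and then presentRequests
            # to retrieve records; this stub just returns a placeholder result.
            return [f"record matching {query!r} from {self.host}/{self.database}"]

    catalogues = [
        Z3950Client("catalog.example-umd.edu", 210, "books"),       # hypothetical hosts
        Z3950Client("catalog.example-berkeley.edu", 210, "books"),
    ]
    for catalogue in catalogues:
        for record in catalogue.search('title = "electronic texts"'):
            print(record)

The design point is that the patron's interface stays constant while the server being searched changes, which is what makes access across institutional boundaries feel routine.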
LARSEN illustrated the trade-off between volumes of data in bytes or orders of magnitude and the potential usage of that data. He discussed transmission rates (particularly, the time it takes to move various forms of information), and what one could do with a network supporting multigigabit-per-second transmission. At the moment, the network environment includes a composite of data-transmission requirements, volumes and forms, going from steady to bursty (high-volume) and from very slow to very fast. This aggregate must be considered in the design, construction, and operation of multigigabyte networks.
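A rough worked example of that trade-off can be computed directly from transfer time = size in bits divided by link rate. The object sizes and the multigigabit figure below are illustrative assumptions; the dial-up and backbone rates echo those cited earlier in the session.

    # Back-of-the-envelope transfer times: time = (size_bytes * 8) / rate_bits_per_sec.
    # Object sizes and the multigigabit rate are illustrative assumptions only.

    def transfer_seconds(size_bytes, rate_bits_per_sec):
        return size_bytes * 8 / rate_bits_per_sec

    objects = {
        "bibliographic record (2 KB)": 2e3,
        "scanned page image (100 KB)": 100e3,
        "high-resolution color image (30 MB)": 30e6,
    }
    links = {
        "2,400-baud dial-up": 2.4e3,
        "4.5-Mbps backbone link": 4.5e6,
        "multigigabit link (assume 2.4 Gbps)": 2.4e9,
    }
    for obj_name, size in objects.items():
        for link_name, rate in links.items():
            print(f"{obj_name} over {link_name}: {transfer_seconds(size, rate):,.3f} s")

Such figures show why a catalogue record is painless on any link while image and video traffic dominates the design of the backbone.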
LARSEN's objective is to use the networks and library systems now being constructed to increase access to resources wherever they exist, and thus, to evolve toward an on-line electronic virtual library.
LARSEN concluded by offering a snapshot of current trends: continuing geometric growth in network capacity and number of users; slower development of applications; and glacial development and adoption of standards. The challenge is to design and develop each new application system with network access and scalability in mind.
******
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BROWNRIGG  Access to the Internet cannot be taken for granted * Packet radio and the development of MELVYL in 1980-81 in the Division of Library Automation at the University of California * Design criteria for packet radio * A demonstration project in San Diego and future plans * Spread spectrum * Frequencies at which the radios will run and plans to reimplement the WAIS server software in the public domain * Need for an infrastructure of radios that do not move around *
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Edwin BROWNRIGG, executive director, Memex Research Institute, first polled the audience in order to seek out regular users of the Internet as well as those planning to use it some time in the future. With nearly everybody in the room falling into one category or the other, BROWNRIGG made a point re access, namely that numerous individuals, especially those who use the Internet every day, take for granted their access to it, the speeds with which they are connected, and how well it all works. However, as BROWNRIGG discovered between 1987 and 1989 in Australia, if one wants access to the Internet but cannot afford it or has some physical boundary that prevents her or him from gaining access, it can be extremely frustrating. He suggested that because of economics and physical barriers we were beginning to create a world of haves and have-nots in the process of scholarly communication, even in the United States.
BROWNRIGG detailed the development of MELVYL in academic year 1980-81 in the Division of Library Automation at the University of California, in order to underscore the issue of access to the system, which at the outset was extremely limited. In short, the project needed to build a network, which at that time entailed use of satellite technology, that is, putting earth stations on campus and also acquiring some terrestrial links from the State of California's microwave system. The installation of satellite links, however, did not solve the problem (which actually formed part of a larger problem involving politics and financial resources). For while the project team could get a signal onto a campus, it had no means of distributing the signal throughout the campus. The solution involved adopting a recent development in wireless communication called packet radio, which combined the basic notion of packet-switching with radio. The project used this technology to get the signal from a point on campus where it came down, an earth station for example, into the libraries, because it found that wiring the libraries, especially the older marble buildings, would cost $2,000-$5,000 per terminal.
BROWNRIGG noted that, ten years ago, the project had neither the public policy nor the technology that would have allowed it to use packet radio in any meaningful way. Since then much had changed. He proceeded to detail research and development of the technology, how it is being deployed in California, and what direction he thought it would take. The design criteria are to produce a high-speed, one-time, low-cost, high-quality, secure, license-free device (packet radio) that one can plug in and play today, forget about it, and have access to the Internet. By high speed, BROWNRIGG meant 1 megabyte and 1.5 megabytes. Those units have been built, he continued, and are in the process of being type-certified by an independent underwriting laboratory so that they can be type-licensed by the Federal Communications Commission. As is the case with citizens band, one will be able to purchase a unit and not have to worry about applying for a license.
The basic idea, BROWNRIGG elaborated, is to take high-speed radio data transmission and create a backbone network that at certain strategic points in the network will "gateway" into a medium-speed packet radio (i.e., one that runs at 38.4 kilobytes), so that perhaps by 1994-1995 people like those in the audience could, for the price of a VCR, purchase a medium-speed radio for the office or home, have full network connectivity to the Internet, and partake of all its services, with no need for an FCC license and no regular bill from the local common carrier. BROWNRIGG presented several details of a demonstration project currently taking place in San Diego and described plans, pending funding, to install a full-bore network in the San Francisco area. This network will have 600 nodes running at backbone speeds, and 100 of these nodes will be libraries, which in turn will be the gateway ports to the 38.4 kilobyte radios that will give coverage for the neighborhoods surrounding the libraries.
BROWNRIGG next explained Part 15.247, a new rule within Title 47 of the Code of Federal Regulations enacted by the FCC in 1985. This rule challenged the industry, which has only now risen to the occasion, to build a radio that would run at no more than one watt of output power and use a fairly exotic method of modulating the radio wave called spread spectrum. Spread spectrum in fact permits the building of networks so that numerous data communications can occur simultaneously, without interfering with each other, within the same wide radio channel.
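The idea behind spread spectrum can be illustrated with a toy direct-sequence example: each transmitter multiplies its data bits by its own chip code, the spread signals add together in the shared channel, and each receiver recovers its own stream by correlating against the matching code. The codes and bit patterns below are illustrative assumptions and greatly simplified relative to an actual Part 15.247 radio, which uses much longer pseudo-random sequences or frequency hopping.

    # Minimal sketch of direct-sequence spread spectrum sharing one wide channel.
    # Two transmitters spread their bits with different (orthogonal) chip codes;
    # the receiver recovers each stream by correlating with the matching code.

    CODE_A = [+1, +1, +1, +1]   # orthogonal Walsh-style chip sequences (assumed)
    CODE_B = [+1, -1, +1, -1]

    def spread(bits, code):
        # Each data bit (+1 or -1) becomes len(code) chips.
        return [bit * chip for bit in bits for chip in code]

    def despread(signal, code):
        n = len(code)
        bits = []
        for i in range(0, len(signal), n):
            # Correlate one symbol's worth of chips against the code.
            corr = sum(s * c for s, c in zip(signal[i:i + n], code))
            bits.append(+1 if corr > 0 else -1)
        return bits

    bits_a = [+1, -1, +1]
    bits_b = [-1, -1, +1]

    # Both transmissions occupy the same wide channel at the same time.
    channel = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

    assert despread(channel, CODE_A) == bits_a
    assert despread(channel, CODE_B) == bits_b

Because each receiver correlates only against its own code, the overlapping transmissions come apart cleanly, which is what allows numerous data communications to share the same wide radio channel without interfering.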
BROWNRIGG explained that the frequencies at which the radios would run are very short