developers to meet this challenge by developing products that
support any language. This simplifies testing and
configuration management, accelerates time to market, lowers
unit costs and allows companies to quickly and easily enter
new markets and offer greater levels of personalization and
customer satisfaction.”
GlobalVu converts text to device-independent images.
GlobalEase Web is a “Java-based multilingual text input and
display engine”. It includes virtual keyboards, front-end
processors, and a contextual processor and text layout engine
for left-to-right and right-to-left language formatting. They
have versions tailored to the specifications of mobile
devices.
The secret is in generating and processing images (bitmaps),
compressing them and transmitting them. In a way, WordWalla
generates a FACSIMILE message (the kind we receive on our fax
machines) every time text is exchanged. It is transparent to
both sender and receiver - and it makes a user-driven
polyglottal Internet a reality.
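WordWalla's pipeline is proprietary, but the general technique
this passage describes - render the text once, ship the pixels -
can be sketched in a few lines of Python with the Pillow imaging
library (the font file and sample string are illustrative
assumptions):

    from io import BytesIO
    from PIL import Image, ImageDraw, ImageFont  # Pillow imaging library

    def text_to_bitmap(text: str, font_path: str, size: int = 24) -> bytes:
        """Render Unicode text to a compressed PNG: the receiving
        device needs no fonts, only the ability to display pixels."""
        font = ImageFont.truetype(font_path, size)
        left, top, right, bottom = font.getbbox(text)  # measure the text
        image = Image.new("L", (right - left + 8, bottom - top + 8), color=255)
        ImageDraw.Draw(image).text((4 - left, 4 - top), text, font=font, fill=0)
        buffer = BytesIO()
        image.save(buffer, format="PNG", optimize=True)  # compress, fax-style
        return buffer.getvalue()

    # Hypothetical usage: Japanese renders identically on any device.
    png_bytes = text_to_bitmap("こんにちは", "NotoSansJP-Regular.ttf")

The trade-off the passage glosses over is bandwidth: pixels are
bulkier than characters, which is why compression is an integral
part of the scheme.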
Deja Googled
By: Sam Vaknin
http://groups.google.com/
http://groups.google.com/googlegroups/archive_announce.html
The Internet may have started as the fervent brainchild of
DARPA, the US defence agency - but it quickly evolved into a
network of computers at the service of a community. Academics
around the world used it to communicate, compare results,
compute, interact and flame each other. The ethos of the
community as content-creator, source of information, fount of
emotional sustenance, peer group, and social substitute is
well embedded in the very fabric of the Net. Millions of
members in free, advertising- or subscription-financed mega-sites such as Geocities, AOL, Yahoo and Tripod generate more
bits and bytes than the rest of the Internet combined. This
traffic emanates from discussion groups, announcement
(mailing) lists, newsgroups, and content sites (such as
Suite101 and Webseed). Even the occasional visitor can find
priceless gems of knowledge and opinion in the mound of trash
and frivolity that these parts of the web have become.
The emergence of search engines and directories which cater
only to this (sizeable) market segment was to be expected. By
far the most comprehensive (and, thus, least discriminating)
was Deja. It spidered and took in the exploding newsgroups
(Usenet) scene with its tens of thousands of daily messages.
When it was taken over by Google, its archives contained more
than 500 million messages, cross-indexed every which way and
pertaining to every possible (and many an impossible) topic.
Google is by far the most popular search engine yet, having
surpassed the more veteran Northern Light, Fast, and Alta
Vista. Its mind-defying database (more than 1.3 billion web
pages), its caching technology (making it, in effect, one of
the biggest libraries on earth) and its site ranking (by
popularity and links-over) have rendered it unbeatable. Yet,
its efforts to integrate the treasure trove that is Deja and
adapt it to the Google search interface have hitherto been
spectacularly unsuccessful (though it finally made it two and
a half months after the purchase). So much so that it gave
birth to a protest movement.
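The essay credits Google's edge partly to ranking “by
popularity and links-over”. A minimal Python sketch of
link-based ranking in that spirit - the damping factor,
iteration count, and four-page toy web are assumptions, not
Google's actual parameters - might read:

    def rank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
        """Iteratively redistribute score along hyperlinks: a page
        is important if important pages link to it."""
        pages = list(links)
        score = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            fresh = {page: (1.0 - damping) / len(pages) for page in pages}
            for page, outgoing in links.items():
                for target in outgoing:
                    fresh[target] += damping * score[page] / len(outgoing)
            score = fresh
        return score

    # Hypothetical toy web of four pages; "c" collects the most links.
    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(rank(toy_web))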
Bickering and bad-tempered flaming (often bordering on the
deranged, the racist, or the stalking) are the more repulsive
aspects of the Usenet groups. But at the heart of the debate
this time is no ordinary sadistic venting. The issue is: who
owns content generated by the public at large on computers
funded by tax dollars? Can a commercial enterprise own and
monopolize the fruits of the collective effort of millions of
individuals from all over the world? Or should such
intellectual property remain in the public domain, perhaps
maintained by public institutions (such as the Library of
Congress)? Should open source movements gain access to Deja’s
source code in order to launch Deja II? And who owns the
copyright to all these messages (theoretically, the authors)?
Google, as Deja before it, is offering compilations of this
content, the copyright to which it does not and cannot own.
The very legal concept of intellectual property is at the crux
of this virtual conflict.
Google was, thus, compelled to offer free access to the
CONTENT of the Deja archives to alternative (non-Google)
archiving systems. But it remains mum on the search
programming code and the user interface. Already one such open
source group (called Dela News) is coalescing, although it is
not clear who will bear the costs of the gigantic storage and
processing such a project would require. Dela wants to have a
physical copy of the archive deposited in trust with a dot
org.
This raises a host of no less fascinating subjects. The Deja
Usenet search technology, programming code, and systems are
inextricable and almost indistinguishable from the Usenet
archive itself. Without these elements - structural as well as
dynamic - there will be no archive and no way to extract
meaningful information from the chaotic bedlam that is the
Usenet environment. In this case, the information lies in the
ordering and classification of raw data and not in the content
itself. This is why the open source proponents demand that
Google share both content and the tools to access it. Google’s
hasty and improvised unplugging of Deja in February only
served to aggravate the die-hard fans of erstwhile Deja.
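Deja's code was never published; as a hedged illustration of
why “the ordering and classification of raw data” is where the
value lies, here is a toy Python inverted index over
hypothetical Usenet-style messages - without some such
structure, the archive is just unsearchable raw text:

    from collections import defaultdict

    # Hypothetical messages: (message_id, newsgroup, body).
    messages = [
        ("m1", "comp.lang.c", "question about pointers"),
        ("m2", "sci.math", "a question on primes"),
        ("m3", "comp.lang.c", "pointers clarified"),
    ]

    # Inverted index: word -> message ids; group index: group -> ids.
    word_index: dict[str, set[str]] = defaultdict(set)
    group_index: dict[str, list[str]] = defaultdict(list)
    for msg_id, group, body in messages:
        group_index[group].append(msg_id)
        for word in body.lower().split():
            word_index[word].add(msg_id)

    # Query: messages mentioning "pointers" in comp.lang.c.
    hits = word_index["pointers"] & set(group_index["comp.lang.c"])
    print(sorted(hits))  # ['m1', 'm3']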
The Usenet is not only the refuge of pedophiles and neo-Nazis.
It includes thousands of academically rigorous and
research-inclined discussion groups which morph with intellectual
trends and fashionable subjects. More than twenty years of
wisdom and erudition are buried in servers all over the world.
Scholars often visit Usenet in their pursuit of complementary
knowledge or expert advice. The Usenet is also the
documentation of Western intellectual history in the last
three decades. It is invaluable. Google’s decision to abandon
the internal links between Deja messages means the
disintegration of the hyperlinked fabric of this resource -
unless Google comes up with an alternative (and expensive)
solution.
Google is offering a better, faster, more multi-layered and
multi-faceted access to the entire archive. But its brush with
the more abrasive side of the open source movement brought to
the surface long suppressed issues. This may be the single
most important contribution of this otherwise not so opportune
transaction.
Maps of Cyberspace
By: Sam Vaknin
“Cyberspace. A consensual hallucination experienced daily by
billions of legitimate operators, in every nation, by children
being taught mathematical concepts… A graphical
representation of data abstracted from the banks of every
computer in the human system. Unthinkable complexity. Lines of
light ranged in the non-space of the mind, clusters and
constellations of data. Like city lights, receding…”
(William Gibson, “Neuromancer”, 1984, page 51)
http://www.ebookmap.net/maps.htm
http://www.cybergeography.org/atlas/atlas.html
At first sight, it appears to be a static, cluttered diagram
with multicoloured, overlapping squares. Really, it is an
extremely powerful way of presenting the dynamics of the
emerging e-publishing industry. R2 Consulting has constructed
these eBook Industry Maps to “reflect the evolving business
models among publishers, conversion houses, digital
distribution companies, eBook vendors, online retailers,
libraries, library vendors, authors, and many others. These
maps are 3-dimensional, offering viewers both a high-level
orientation to the eBook landscape and an in-depth look at
multiple eBook models and the partnerships that have formed
within each one.” Pass your mouse over any of the squares and
a virtual floodgate opens - a universe of interconnected and
hyperlinked names, a detailed atlas of who does what to whom.
eBookMap.net is one example of a relatively novel approach to
databases and web indexing. The metaphor of cyberspace comes
alive in spatial, two- and three-dimensional map-like
representations of the world of knowledge in Cybergeography’s
online “Atlas”. Instead of endless, static and bi-chromatic
lists of links - Cybergeography catalogues visual, recombinant
vistas with a stunning palette, internal dynamics and an
intuitively conveyed sense of inter-relatedness. Hyperlinks
are incorporated in the topography and topology of these
almost-neural maps.
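One way to picture “hyperlinks incorporated in the topography”
is a graph whose nodes carry map coordinates, a label, and a
URL. The Python sketch below is a schematic guess at such a
structure, not any atlas's actual data format:

    from dataclasses import dataclass, field

    @dataclass
    class MapNode:
        """A region on a cybermap: a position, a label, and the
        hyperlink it opens when clicked."""
        label: str
        x: float
        y: float
        url: str
        neighbours: list["MapNode"] = field(default_factory=list)

    def link(a: MapNode, b: MapNode) -> None:
        """Related nodes are drawn near each other and connected."""
        a.neighbours.append(b)
        b.neighbours.append(a)

    atlas = MapNode("Atlas of Cyberspaces", 0.0, 0.0,
                    "http://www.cybergeography.org/atlas/atlas.html")
    ebooks = MapNode("eBook Industry Map", 1.0, 0.5,
                     "http://www.ebookmap.net/maps.htm")
    link(atlas, ebooks)

Drawing each node at its coordinates and making it clickable
yields exactly the kind of hyperlinked, spatial directory
described above.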
“These maps of Cyberspaces - cybermaps - help us visualise and
comprehend the new digital landscapes beyond our computer
screen, in the wires of the global communications networks and
vast online information resources. The cybermaps, like maps of
the real-world, help us navigate the new information
landscapes, as well as being objects of aesthetic interest. They
have been created by ‘cyber-explorers’ of many different
disciplines, and from all corners of the world. Some of the
maps … in the Atlas of Cyberspaces … appear familiar,
using the cartographic conventions of real-world maps, however,
many of the maps are much more abstract representations of
electronic spaces, using new metrics and grids.”
Navigating these maps is like navigating an inner, familiar
territory.
They come in all shapes and modes: flow charts, quasi-geographical maps, 3-d simulator-like terrains and many
others. The “Web Stalker” is an experimental web browser which
is equipped with mapping functions. The range of applicability
is mind-boggling.
A (very) partial list:
The Internet Genome Project - “open-source map of the major
conceptual components of the Internet and how they relate to
each other”
Anatomy of a Linux System - Aimed to “…give viewers a
concise and comprehensive look at the Linux universe, and at
the heart of the poster is a gravity well graphic showing the
core software components, surrounded by explanatory text”
NewMedia 500 - The financial, strategic, and other
inter-relationships and interactions between the leading 500 new
(web) media firms
Internet Industry Map - Ownership and alliances determine
status, control, and access in the Internet industry. A
revealing organizational chart.
The Internet Weather Report measures Internet performance,
latency periods and downtime based on a sample of 4000
domains (a toy latency probe is sketched after this list).
Real Time Geographic Visualization of WWW Traffic - a
stunning, 3-d representation of web usage and traffic
statistics the world over.
WebBrain and Map.net provide a graphic rendition of the Open
Directory Project. The thematic structure of the ODP is
instantly discernible.
The WebMap is a visual, multi-category directory which
contains 2,000,000 web sites. The user can zoom in and out of
sub-categories and “unlock” their contents.
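As flagged above, a toy probe in the spirit of the Internet
Weather Report can simply time a TCP connection to each host in
a sample. The port, timeout, and domains below are assumptions;
the real service's methodology is not described here:

    import socket
    import time

    def probe_latency(host: str, port: int = 80,
                      timeout: float = 2.0) -> float | None:
        """Measure TCP connect time to a host in milliseconds;
        None means the host was unreachable (downtime)."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None

    # Hypothetical sample, standing in for the report's 4000 domains.
    for domain in ["example.com", "example.org"]:
        print(domain, probe_latency(domain))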
Maps help write fiction, trace a user’s clickpath (replete
with clickable web sites), capture Usenet and chat
interactions (threads), plot search results (though Alta Vista
discontinued its mapping service and Yahoo!3D is no more),
bookmark web destinations, and navigate through complex sites.
Different metaphors are used as interface. Web sites are
represented as plots of land, stars (whose brightness
corresponds to the web site’s popularity ranking), amino-acids
in DNA-like constellations, topographical maps of the ocean
depths, buildings in an urban landscape, or other objects in a
pastoral setting. Virtual Reality (VR) maps allow information
to be simultaneously browsed by teams of collaborators,
sometimes represented as avatars in a fully immersive
environment. In many applications, the user is expected to fly
amongst the data items in virtual landscapes. With the advent
of sophisticated GUIs (Graphic User Interfaces) and VRML
(Virtual Reality Modeling Language) - these maps may well show
us the way to a more colourful and user-friendly future.
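As a sketch of the simplest of these mappings - star brightness
tied to a site's popularity ranking - the translation from rank
to visual attribute can be a one-liner in Python (the linear
formula is an assumption, not any product's actual scaling):

    def star_brightness(rank: int, max_rank: int = 1000) -> float:
        """Map a popularity rank (1 = most popular) to a brightness
        in [0, 1]: the more popular the site, the brighter its star."""
        rank = max(1, min(rank, max_rank))
        return 1.0 - (rank - 1) / (max_rank - 1)

    print(star_brightness(1))     # 1.0 - the brightest star
    print(star_brightness(1000))  # 0.0 - the dimmest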
The Universal Intuitive Interface
By: Sam Vaknin
The history of technology is the history of interfaces - their
successes and failures. The GUI (the Graphic User Interface) -
which replaced cumbersome and unwieldy text-based interfaces
(DOS) - became an integral part of the astounding success of
the PC.
Yet, all computer interfaces hitherto share the same growth-stunting problems. They are:
(a) Non-transparency - the workings of the hardware and
software (the “plumbing”) show through
(b) Non-ubiquity - the interface is connected to a specific
machine and, thus, is non-transportable
(c) Lack of friendliness (i.e., the interfaces require
specific knowledge and specific sequences of specific
commands).
Even the most “user-friendly” interface is way too complicated
for the typical user. The average PC is hundreds of times more
complicated than your average TV. Even the VCR - far less
complex than the PC - is a challenge. How many people use the
full range of a VCR’s options?
The ultimate interface, in my view, should be:
(a) Self-assembling - it should reconstruct itself, from time
to time, fluidly
(b) Self-recursive - it should be able to observe and analyze
its own behavior
(c) Learning-capable - it should learn from its experience
(d) Self-modifying - it should modify itself according to its
accumulated experience
(e) History-recording
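No interface with all five properties exists as described; the
toy Python sketch below illustrates (c), (d) and (e) only - an
element that records its history, learns usage frequencies, and
reorders itself accordingly (the class and method names are
hypothetical):

    from collections import Counter

    class AdaptiveMenu:
        """A toy interface element: records every choice, learns
        from it, and modifies its own ordering."""

        def __init__(self, commands: list[str]) -> None:
            self.commands = commands
            self.history: list[str] = []   # (e) history-recording
            self.usage: Counter[str] = Counter()

        def choose(self, command: str) -> None:
            self.history.append(command)   # observe its own behaviour
            self.usage[command] += 1       # (c) learn from experience

        def render(self) -> list[str]:
            # (d) self-modifying: most-used commands float to the top.
            return sorted(self.commands, key=lambda c: -self.usage[c])

    menu = AdaptiveMenu(["open", "save", "print"])
    for _ in range(3):
        menu.choose("save")
    print(menu.render())   # ['save', 'open', 'print']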
It must possess a “picture of