Network Working Group C. Malamud
Internet-Draft M.T. Rose
Expires: September 30, 2000 Invisible Worlds, Inc.
April 1, 2000
Maps, Space, and Other Metaphors for Metadata
draft-mrose-blocks-metadesign-00
Status of this Memo
This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026 except that the right to
produce derivative works is not granted. (If this document becomes
part of an IETF working group activity, then it will be brought into
full compliance with Section 10 of RFC2026.)
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on September 30, 2000.
Copyright Notice
Copyright (C) The Internet Society (2000). All Rights Reserved.
Abstract
This memo describes the design principles for the Blocks[1]
architecture. The Blocks architecture focuses on the management of
metadata.
To subscribe to the Blocks discussion list, send email to
blocks-request@invisible.net[15]; there is also a developers site at
http://mappa.mundi.net[16].
Table of Contents
1.   Map as Metaphor
2.   Space as Better Metaphor
3.   Distributing Search
4.   Avoiding the OSIfication of Space
5.   The Blocks Architecture
5.1  Current Status
5.2  Things We Left Out
     References
     Authors' Addresses
     Full Copyright Statement
1. Map as Metaphor
In 1990, while working under the tutelage of the legendary Arlington
Hewes at The Phone Company (tpc.int),[2] we posited the need for Yet
Another Protocol (YAP). At the time, a variety of application
protocols were under active development, the web had not yet been
fully born, and the TCP protocol had not yet been revised to make
the port number a constant.
We immediately began work on the protocol, naming it Blocks. We
then deferred any future specification of the details for 10 years
in order to properly ponder the problem.
The guiding metaphor of the Blocks protocol was the map. The pain
that produced the metaphor was network topology.
In those days, the Cisco router was not the marvel of plug-and-play
interoperability that we know today. There were times when, through
inattentive reading of the step-by-step documentation, we screwed up
the routes and rendered our networks inoperable.
An old network management trick, when you screw up your network
cloud, is to telnet into the closest router using the IP address
(the DNS name being somewhat useless if you can't see your DNS
server). Once on that first router, you telnet to the next one
until, after crawling over a series of data links, you reach the
offending misconfigured router, chant a few incantations, and repair
the net.
A map of the network topology seemed an obvious tool in assisting
such misguided operations (we leave aside the obvious issue of the
server that has your map being in the unreachable part of your
network). Indeed, maps of other people's networks might even be a
useful tool for identifying services available to the outside world,
say an ftp, whois, or finger server. Thus was born the idea of a
protocol that would assist in mapping the Internet, sharing those
maps, and giving people the ability to step above the net and look
around.
Many people regarded the SNMP protocol suite[3] as the mechanism
that would allow these maps to be built by network management
software. In retrospect, SNMP had a much more precise function.
Rather than being the magic bullet that reached the holy grail of
visualizing network topology on the fly, SNMP ended up being a means
of instrumenting network devices, allowing data such as the number
of packets dropped on a router or the average power consumption of a
UPS to be made available to a network management agent.
The extensible nature of SNMP, through the use of MIBs, made it a
valuable model for protocol development. The core protocol was fixed
and simple, yet the MIB effort allowed anyone and everyone to draft
a method of instrumenting devices from toasters to satellites. If a
monkey wanted to draft a banana tree MIB, the SNMP protocol
mechanism allowed the monkey to do so, and it was up to the user
base and the market to decide if that MIB came into broad use.
However, the requirements of the banana MIB did not impact the core
protocol or even the development of other MIBs.
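As a purely illustrative aside, that instrumentation role is easy to picture in code. The sketch below is hypothetical: it reads a single counter (inbound packets discarded on interface 1) from a router using Python and the pysnmp package, and the router address and community string are made up.

   # Hypothetical sketch: poll one instrumented value from a router via
   # SNMP.  The OID is IF-MIB::ifInDiscards.1; the address and community
   # string are placeholders.
   from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                             ContextData, ObjectType, ObjectIdentity, getCmd)

   error_indication, error_status, error_index, var_binds = next(getCmd(
       SnmpEngine(),
       CommunityData('public', mpModel=0),            # SNMPv1
       UdpTransportTarget(('192.0.2.1', 161)),        # example router
       ContextData(),
       ObjectType(ObjectIdentity('1.3.6.1.2.1.2.2.1.13.1'))))

   if error_indication:
       print(error_indication)
   else:
       for var_bind in var_binds:
           print(' = '.join(x.prettyPrint() for x in var_bind))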
Unfortunately, SNMP did not solve the problem of divining network
topology. It was obviously a valuable source of data, but any
information gleaned from skulking SNMP-accessible devices would have
to be combined with data from a wide variety of other resources such
as the routing tables. We thus moved from the issue of visualizing
the network to that of a flexible mechanism for resource discovery.
The rich visual map of a network built from a large number of
resource discovery mechanisms was certainly a tool for the
professional network manager, but we actually envisioned that this
mechanism would prove useful to end users who would want to see
network topology as a means towards better navigation.
We had a scenario for why end users would care about network topology,
a notion that was considered somewhat heretical by the "Internet hides
all topology and transparently connects end nodes" oral tradition
that serves as the Internet architecture. This school of thought
felt that networks should always be transparent and that even things
like domain names would be hidden from the user.
Our scenario for why the end user should care ran something like
this. Let's say you engaged in an MBONE video session with a user
named Deering at Xerox PARC. In 1990 and 1991, multicast was just
beginning the hypergrowth that has led to our modern MBONE, and
session directory protocols like SDP[4] did not exist. If the face
at the other end of the video screen says something interesting in a
video conference, your first inclination would be to look around the
subnetwork that is the source of the video. Is there perhaps an
FTP, mail, or finger server on that subnet? Is there a little FTP
server on the same machine as the video? Most likely a personal
archive of documents. Perhaps a "big" server on the same subnet (as
evidenced by the number of documents, size of machine, or kind of
link)? Perhaps the departmental server!
We thus saw maps of networks as a service location and navigation
tool. And, if the resource discovery and map construction could be
based on a well-defined protocol, perhaps the effort of mapping the
entire Internet could be accomplished as a highly-distributed
enterprise. Indeed, such a protocol would allow one group to map a
network and then share that map with other people. The key to these
maps was the distributed collection of data, the ability to add to
and personalize the data collected, and the ability to construct and
share different views or interpretations of the underlying topology.
Maps are a metaphor, and one can argue that maps of network
topologies are the wrong metaphor to be pursuing. After all, on the
Internet, one can build virtual worlds. Maps of topology held no
interest for many people, but building virtual worlds attracted a
huge following to efforts such as VRML.
Our feeling was that if you were going to build virtual worlds, you
shouldn't start from scratch. You should start with the real world
(and to us the mass of data and servers that is the Internet is the
real world) and use that as a bootstrap mechanism, the raw materials
that people would use to build virtual worlds.
Network topology as a useful tool to the end user and as the raw
material of an effort to construct virtual worlds became the
elevator pitch. Map became the metaphor.
2. Space as Better Metaphor
In 1998 (after several unsuccessful attempts to spin up the
effort), we formed Invisible Worlds and met several times with our
Protocol Advisory Board. The Protocol Advisory Board provides advice
and direction on the core specifications for the Blocks Protocol
Suite. The Protocol Advisory Board is consensus driven, which means
its advice is not necessarily the product of, or agreed to by, any
particular member. Note that the participation of these people on
the Protocol Advisory Board does not constitute or imply an
endorsement by the members' respective employers, nor does their
participation constitute an endorsement in any of their various
official capacities. The members of the PAB at the time included
David Clark, David Crocker, Paul Vixie, Paul Mockapetris, and Steve
Deering.
We were determined to proceed on the production of an Internet
Atlas, a large-scale effort to map the Internet.
But, what does mapping the Internet mean? A few people bought the
proposition that you started with a critical mass of information
from network topology and used this as a bootstrap mechanism, but
not everybody was convinced. What became clear was that the map was
the wrong metaphor. And, because the map was the wrong metaphor, we
were solving the wrong problem.
A map of a network topology assumes there is something to map. And,
everybody is going to want to map something different. The map
assumes a space to be mapped. Space is the proper metaphor and the
map is one possible visualization of that space. Our first
architectural principle thus became the late binding of the
collection of the data to the means of visualizing that information.
Once we realized we were looking at a data flow architecture with
resources being discovered and then visualized in a variety of ways,
it became equally clear that the problem we were dealing with was a more
general problem, the management of metadata. Metadata defines a
space and is the raw material that one uses to navigate that space.
Space as a metaphor proved quite powerful, with immediate
applications to maps (or other navigation means) not only of network
topology, but of spaces such as the web, or a particular collection
of related information.
Our job became one of providing the user with what David Clark calls
the "up" button. Given a resource on the Internet, say a document
or a router, our job became that of giving the user the ability to
hit an up button, take a step above, look around, and "see" what
resources are nearby. According to Clark, our system should allow
the user to define what "near" means in any given context.
Our canonical example of a space became what we call "deep wells" of
information. Take the SEC's EDGAR system as the beginning of a
space.
EDGAR is a constant flow of filings by public corporations that
accumulates over time. The dimensions of the space are pieces of
metadata that are common to several of these filings, such as the
name of the filing corporation, the company's state of incorporation,
the form type, or the company's Standard Industrial Classification.
In this space, an annual report (known as a 10-K) by Cisco might be
"near" other objects on a variety of dimensions. An earlier 10-K by
Cisco, other filings by Cisco, a 10-K by other companies in the same
SIC code, or a 10-K by a company that Cisco has invested in
are all objects that are near the annual report in question.
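To make "near" concrete, here is a hypothetical sketch (in Python, with made-up filing data) in which each filing is simply a record of dimension values and nearness is nothing more than agreement along whichever dimension the user picks:

   # Hypothetical sketch: filings as metadata records; "near" means
   # sharing a value along one caller-chosen dimension.  The records
   # are invented for illustration.
   filings = [
       {"company": "CISCO SYSTEMS INC", "form": "10-K", "sic": "3576"},
       {"company": "CISCO SYSTEMS INC", "form": "10-Q", "sic": "3576"},
       {"company": "ACME ROUTERS INC",  "form": "10-K", "sic": "3576"},
       {"company": "BETTERDOGFOOD.COM", "form": "10-K", "sic": "2047"},
   ]

   def near(anchor, dimension, space=filings):
       """Everything in the space matching the anchor along one dimension."""
       return [f for f in space
               if f is not anchor and f[dimension] == anchor[dimension]]

   cisco_10k = filings[0]
   print(near(cisco_10k, "company"))  # other filings by the same company
   print(near(cisco_10k, "sic"))      # filings by companies in the same SIC code

The choice of dimension is the "define what near means" knob: the same records answer very different questions depending on which dimension the user steps along.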
The deep wells of information form the same bootstrap mechanism that
we had hoped to achieve with network topology. The EDGAR database
has several hundred thousand documents that are rich in metadata.
One of the things we realized, however, is that resource discovery
and data mining are difficult processes. There are
many different algorithms to use to discover things ranging from
simple transformations based on regular expressions to complex
linguistic analysis to determine the presence of certain forms of
business events (e.g., "any evidence of insolvency disclosed in this
document").
A requirement for our application (and hence for the underlying
protocol supporting the application) was that many different methods
of resource discovery had to be able to coexist. Spaces may be
defined through a simple process (e.g., taking each EDGAR document
and creating some metadata), but it must also be possible for the
spaces to accumulate over time. In particular, we wanted many
different resource discovery mechanisms to coexist peacefully with
each other and with human beings.
It became clear that we needed to begin defining some architectural
principles. To illustrate why this "location" and "navigation"
protocol called Blocks was needed, we'd been giving the example of
the pain caused by search engines that returned 30,000 results on a
simple query. Why were the search engines not solving this problem
of navigation?
3. Distributing Search
The modern search engine (indeed, any portal, vortal, or other
buzzword denoting a large amount of information served on-line) is
in a sense the classic centralized service. Crawling the net for
keywords, indexing and searching the database of keywords, and
preparing the results as a web page are all bundled together in a
single proprietary, centralized solution (centralized in the sense
of the administrative boundaries, not in the sense of the numbers of
computers needed to create any one service such as Yahoo).
While Google, Yahoo, and AltaVista are each complete solutions,
there is no interoperability among these services. Only through
cheap hacks can one ask a question of both Google and AltaVista and
then combine the results. While there is no interoperability
among these services, it is clear that each of them has created a
space, a view of the underlying network.
The lack of interoperability among search engines and portals was
certainly one issue, but these solutions also missed some of the
things we considered important, particularly programmatic access and
late binding to visualization. While one can perform cheap hacks to
programmatically access a Google, this is certainly not a
satisfactory solution (indeed, if you send too many such
programmatically-constructed queries, the system is likely to start
refusing them). And the results returned are always in
the format that the portal decides is appropriate: a banner ad, a
few results, some formatting they decided looks good.
We took the modern search engine and asked ourselves what we could
do to chop the monolithic application up into several pieces, each
specializing in a specific task. We arrived at three pieces, each
focusing on one part of the puzzle (a small sketch of the resulting
data flow follows the list):
o Finding things on the net, a process we call mixing
o Managing that metadata, the function of a traditional server
o Preparing that metadata for a particular application and user
interface, a process we call building
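The following hypothetical sketch shows that division of labor in miniature; it is written in Python, none of the names are drawn from the Blocks software, and the "documents" are invented, but the data flow is the one just described: a mixer finds things and emits metadata, a server manages it, and a builder prepares one particular presentation.

   # Hypothetical sketch of the three roles and the data flow between them.
   def mixer(documents):
       """Finding things: turn raw documents into metadata records."""
       for url, text in documents.items():
           yield {"url": url, "title": text.splitlines()[0]}

   class Server:
       """Managing metadata: store records and answer simple queries."""
       def __init__(self):
           self.records = []
       def store(self, record):
           self.records.append(record)
       def query(self, word):
           return [r for r in self.records
                   if word.lower() in r["title"].lower()]

   def builder(records):
       """Preparing metadata for one presentation; here, a plain-text list."""
       return "\n".join("%s  <%s>" % (r["title"], r["url"]) for r in records)

   docs = {"http://example.org/a": "Annual report\n...",
           "http://example.org/b": "Banana tree MIB\n..."}
   server = Server()
   for record in mixer(docs):
       server.store(record)
   print(builder(server.query("banana")))

Because the only coupling is through the server's store and query interface, any number of mixers and builders can be swapped in without touching the other pieces.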
While the process of mixing can be achieved by a global web crawler
(indeed a global bot is a mixer), our philosophy and hence the
software we've built focuses on more limited, specialized crawling
inside of deep wells and other specifically targeted resources.
While leading-edge, all-inclusive algorithms to read every word on
the Web are certainly an honorable activity, we also wanted to make
sure that our architecture would leave room for more specialized
agents under the control of domain experts. We wanted these
specialized mixers to be easy to make.
A mixer, ideally, should be able to coexist with many other mixers to
create a space. Each mixer, focusing on a few tasks of limited
scope, contributes a set of resources to the task. Since mixers can
extract metadata not only from the underlying network but from the
information produced by other mixers, the process becomes
incremental. If today's big 5 search engines can only index 20-30%
of the network, our vision is of 1 million little mixers, each
examining 1%.
While the mixers are specialized modules, we left out any
specification of how the mixers find things. These are
implementation details. The server, on the other hand, clearly
needs to be in the middle of an hourglass, with a very simple, fast
and well-defined interface to the mixers.
One of our goals with the server was to use index and search
techniques such as SQL databases and full-text engines, software
that has become a commodity. Rather than writing our own, our goal
was to have the server use these commodities as the engine, and to
hide the details of any one commodity from the mixer.
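A hypothetical sketch of that idea in Python, using SQLite's FTS5 full-text extension as the stand-in commodity engine (the snippet is illustrative only and is not drawn from the Blocks server):

   # Hypothetical sketch: a narrow store/search interface hides a
   # commodity full-text engine (SQLite FTS5, assuming it is compiled
   # in) from the mixers and builders that talk to the server.
   import sqlite3

   class SpaceStore:
       def __init__(self, path=":memory:"):
           self.db = sqlite3.connect(path)
           self.db.execute(
               "CREATE VIRTUAL TABLE IF NOT EXISTS blocks USING fts5(url, body)")

       def store(self, url, body):              # the mixer-facing call
           self.db.execute("INSERT INTO blocks VALUES (?, ?)", (url, body))

       def search(self, terms):                 # the builder-facing call
           cur = self.db.execute(
               "SELECT url FROM blocks WHERE blocks MATCH ?", (terms,))
           return [row[0] for row in cur]

   store = SpaceStore()
   store.store("http://example.org/10-K", "annual report of a router company")
   print(store.search("router"))

Swapping SQLite for Verity, Oracle, or anything else changes only the inside of SpaceStore; the mixers and builders never see the difference.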
The mixer uses a very loose definition on one side of the hourglass
("find things") and a very tight definition of the interface to the
server. Likewise, the builder has a very clearly defined interface
to the server, and a very loose definition on the outside of the
hourglass. The job of the builder is to pipe data into any user
interface (or other output source).
The architectural philosophy of builders, mixers, and servers has
been expressed in an Architectural Precepts document, in a core
protocol (BXXP), and in a metadata application called the Simple
Exchange Profile. These building blocks form the framework, but
leave open the question of what to do with that framework.
4. Avoiding the OSIfication of Space
The architecture of mixers, builders, and servers seemed like a
promising one for chopping up the search engine functionality into a
distributed system. But, what language to use to describe the
spaces?
Here, we entered a world that is very old yet highly immature.
Schemas for the description of spaces date back to X.500, and a
variety of efforts throughout the years have attempted to create the
ultimate global directory.
For our spaces, we envisioned something a little different from the
X.500 concepts of country servers, state servers, city servers, and
institutional servers, all working together to format information
about our global population into one framework. Whereas X.500
organized the world in terms of geography and people, we saw spaces
as a much more abstract, flexible construct. In other words, we
needed a language for describing spaces that helped define a schema,
yet was schema agnostic enough to accommodate a wide variety of
different kinds of metadata.
While HTML didn't serve this purpose very well, it was quickly clear
that XML had emerged as the data description language for the next
millennium. XML has some properties that make it quite attractive.
First, the model of nested documents works well for our world of
objects that contain other objects and relationships to other
objects. The XML committee's decision to simplify SGML, yet still
support proper characters through the UTF-8 and UTF-16 encodings of
Unicode, makes XML a simple but very powerful language.
XML is the generic underpinning, a language for describing data. We
then looked at a variety of other XML-based initiatives to see if
they added power to our ability to describe spaces. The most
promising initiative is the Resource Description Framework. RDF
evolved out of the earlier PICS platform of the W3C, but serves a
much broader role than simply blocking out pornography sites.
Indeed, documents[5] from the W3C explain that this framework serves
a large number of goals, including:
o interoperability of metadata
o machine understandable semantics for metadata
o better precision in resource discovery than full text search
o future-proofing applications as schemas evolve
o a uniform query capability for resource discovery
o a processing rules language for automated decision-making about
Web resources
o language for retrieving metadata from third parties
In addition, Tim Berners-Lee, in a technical note[6] further
explains that metadata (and its instantiation through the RDF
framework) "will allow huge amounts of information in databases and
existing applications to be put on the web, not just for human
browsing but for machine understanding: searching, reasoning, and
analyzing."
Given these technical goals, it seemed to make sense to leverage the
RDF effort for our own application. We thus looked at a variety of
RDF specification documents and examples. A typical example is the
following by Eric Miller:[8]
   <?xml version="1.0"?>
   <rdf:RDF
       xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
       xmlns:dc="http://purl.org/dc/elements/1.1/">
     <rdf:Description rdf:about="http://uri-of-Document-1">
       <dc:Creator>John Smith</dc:Creator>
     </rdf:Description>
   </rdf:RDF>
The example illustrates a variety of concepts from the XML world.
First is the concept of namespaces[7], defined by Bray et al. for
the W3C as a "collection of names, identified by a URI reference"
which provides a mechanism for software to "recognize and act on
these declarations and prefixes." In other words, the namespace is
a scoping mechanism. In this example, there are two types of names:
RDF names and Dublin Core names. The Dublin Core is an earlier
mechanism for tagging metadata.
This example illustrates the concept of "triplets" on which RDF is
based (e.g., "URI" "about" "person"). The target here is a URI, the
action is a "description" and the Dublin Core Creator is John Smith.
What is interesting is the mixing of different schemas and schemes.
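To see how the namespaces do the scoping in practice, here is a hypothetical Python sketch that parses the fragment above (repeated inline so the sketch stands alone) and prints the subject, predicate, and object of the statement:

   # Hypothetical sketch: namespace-aware parsing of the RDF fragment,
   # recovering the (subject, predicate, object) statement.
   import xml.etree.ElementTree as ET

   RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

   fragment = """
   <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
            xmlns:dc="http://purl.org/dc/elements/1.1/">
     <rdf:Description rdf:about="http://uri-of-Document-1">
       <dc:Creator>John Smith</dc:Creator>
     </rdf:Description>
   </rdf:RDF>
   """

   root = ET.fromstring(fragment)
   for description in root.findall("{%s}Description" % RDF):
       subject = description.get("{%s}about" % RDF)
       for prop in description:
           # prop.tag carries the Dublin Core namespace, e.g.
           # {http://purl.org/dc/elements/1.1/}Creator
           print(subject, prop.tag, prop.text)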
A further example serves to illustrate the mixing of schemes, in
this case based on the popular vCard,[9] a digital business card
that is often attached to email messages:
   <vCard>
     <fn>Frank Dawson</fn>
     <n>
       <family>Dawson</family>
       <given>Frank</given>
     </n>
     <tel>+1-617-693-8728</tel>
     <tel>+1-919-676-9515</tel>
     <adr>
       <street>6544 Battleford Drive</street>
       <locality>Raleigh</locality>
       <region>NC</region>
       <pcode>27613-3502</pcode>
       <country>US</country>
     </adr>
     <email>Frank_Dawson@Lotus.com</email>
   </vCard>
Finally, we look at a third example, this one from the Microsoft
BizTalk framework:[10]
   <Invoice>
     <Number>12345</Number>
     <Type>INVOICE</Type>
     <From>
       <Name>betterDogFood.COM</Name>
       <Street>1179 N. McDowell Blvd</Street>
       <City>Petaluma</City>
     </From>
     <To>
       <Name>betterDogFood.COM</Name>
       <Street>1179 N. McDowell Blvd.</Street>
       <City>Petaluma</City>
     </To>
     <LineItem>
       <Description>Alpo</Description>
       <Amount>114.00</Amount>
     </LineItem>
   </Invoice>
One can argue that a vCard or a BizTalk purchase order is not
metadata. However, it became clear to us that there would be a
variety of schemes advanced for the description of metadata and that
any mechanism we put into place should be agnostic to those schemes,
allowing space makers to use the mechanism that fits most naturally
into their particular space.
5. The Blocks Architecture
5.1 Current Status
The Blocks architecture was previously defined in a series of
Internet-Drafts describing the core architecture,[1] the BXXP
application framework,[11] and the Simple Exchange Profile.[12] A
conversational description of the design rationale[13] for the BXXP
application framework has also been published.
The protocol has been implemented as a series of 3 software modules
that were then applied to several "deep wells" of information,
including the SEC's EDGAR databases. The mixer software is
implemented in the Tcl and Perl languages; the SpaceServer is
implemented in Tcl and uses Verity, Oracle, and other commercial
datastores to store, index, and retrieve metadata. Finally,
builders have been implemented in Tcl with the most significant
focus being on builders that act as a web proxy with the Apache web
server. Many of these modules have been described on the developers
site at http://mappa.mundi.net/.
5.2 Things We Left Out
The metadata framework we have designed explicitly left out the
definition of several key issues. In particular, we are schema and
namespace agnostic, allowing a variety of metadata models to be
defined. In the particular case of EDGAR and the other metadata
repositories we have used to test and develop software, we have not
directly employed mechanisms such as RDF.
The system of servers, mixers, and builders provides a distributed
solution, but the solution is one of "islands of distribution." It
is up to mixers and space servers to know the DNS name and port
number of a particular server to communicate with. As such, the
servers have not been stitched together into a truly distributed,
coordinated global service.
The true distribution of a metadata service is a subject of the
second part of our architecture, known as the convergence model.
The convergence model is used for replication of metadata from one
server to another, and is also the basis for knowledge management
and other metadata schema and discovery issues.
Finally, while we have put in hooks for namespace administration in
the Blocks protocol (see the Blocks eXtensible eXchange
Service[14]), we have also deferred further specification of those
issues until more operational experience has been gained. In
particular, a mechanism for distributed management of the namespace
is dependent on the infrastructure for both knowledge management and
bulk replication. In the time-honored tradition of hosts.txt, we
thus manually administer the namespace until a better solution is
necessary.
References
[1] Rose, M.T. and C. Malamud, "Blocks: Architectural Precepts",
draft-mrose-blocks-architecture-01 (work in progress), March
2000.
[2] Rose, M.T. and C. Malamud, "Principles of Operation for the
TPC.INT Subdomain: Remote Printing -- Technical Procedures",
RFC 1528, October 1993.
[3] Rose, M.T., "The Simple Book: An Introduction to Internet
Management, Revised Second Edition", March 2000.
[4] Handley, M. and V. Jacobson, "SDP: Session Description
Protocol", RFC 2327, April 1998.
[5] Berners-Lee, T. and R. Swick, "Frequently Asked Questions About
RDF", W3C RDFFAQ, September 1999.
[6] Berners-Lee, T., "W3C Data Formats", W3C RDFARCH, October 1997.
[7] Bray, T., Hollander, D. and A. Layman, "Namespaces in XML", W3C
XMLNAMESPACES, January 1999.
[8] Miller, E., "An Introduction to the Resource Description
Framework", D-Lib Magazine, May 1998.
[9] Dawson, F., "The vCard v3.0 XML DTD", June 1998.
[10] Microsoft Corporation, "BizTalk (TM) Framework Document Design
Guide", September 1998.
[11] Rose, M.T., "The Blocks eXtensible eXchange Protocol",
draft-mrose-blocks-protocol-01 (work in progress), March 2000.
[12] Rose, M.T., "The Blocks Simple Exchange Profile",
draft-mrose-blocks-exchange-01 (work in progress), March 2000.
[13] Rose, M.T., "On the Design of Application Protocols",
draft-mrose-blocks-appldesign-01 (work in progress), March
2000.
[14] Rose, M.T. and M.R. Gazzetta, "Blocks eXtensible eXchange
Service", draft-mrose-blocks-service-01 (work in progress),
March 2000.
[15] mailto:blocks-request@invisible.net
[16] http://mappa.mundi.net/
Authors' Addresses
Carl Malamud
Invisible Worlds, Inc.
1179 North McDowell Boulevard
Petaluma, CA 94954-6559
US
Phone: +1 707 789 3700
EMail: carl@invisible.net
URI: http://invisible.net/
Marshall T. Rose
Invisible Worlds, Inc.
1179 North McDowell Boulevard
Petaluma, CA 94954-6559
US
Phone: +1 707 789 3700
EMail: mrose@invisible.net
URI: http://invisible.net/
Full Copyright Statement
Copyright (C) The Internet Society (2000). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph
are included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Invisible Worlds expressly disclaims any and all warranties
regarding this contribution including any warranty that (a) this
contribution does not violate the rights of others, (b) the owners,
if any, of other rights in this contribution have been informed of
the rights and permissions granted to IETF herein, and (c) any
required authorizations from such owners have been obtained. This
document and the information contained herein is provided on an "AS
IS" basis and INVISIBLE WORLDS DISCLAIMS ALL WARRANTIES, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE
OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT WILL INVISIBLE WORLDS BE LIABLE TO ANY OTHER PARTY
INCLUDING THE IETF AND ITS MEMBERS FOR THE COST OF PROCURING
SUBSTITUTE GOODS OR SERVICES, LOST PROFITS, LOSS OF USE, LOSS OF
DATA, OR ANY INCIDENTAL, CONSEQUENTIAL, INDIRECT, OR SPECIAL DAMAGES
WHETHER UNDER CONTRACT, TORT, WARRANTY, OR OTHERWISE, ARISING IN ANY
WAY OUT OF THIS OR ANY OTHER AGREEMENT RELATING TO THIS DOCUMENT,
WHETHER OR NOT SUCH PARTY HAD ADVANCE NOTICE OF THE POSSIBILITY OF
SUCH DAMAGES.
Acknowledgement
Funding for the RFC editor function is currently provided by the
Internet Society.