Anyone wanna discuss filtering/reputation problem?
- To: <xanadu@xxxxxxxxxx>, <amix@xxxxxxxxxx>
- Subject: Anyone wanna discuss filtering/reputation problem?
- From: Robin Hanson <Hanson@xxxxxxxxxxxxxxxxxxx>
- Date: Thu, 28 Jun 90 16:05 PDT
- Cc: <hanson@xxxxxxxxxxxxxxxxxxx>
"readers will pay attention to material recommended by whomever they
respect" -- Drexler in Engines of Creation
"most users would chose to filter out ... comments by individuals with
poor reputations." -- Lavoie et al. in Market Process
Most of us have said things like this at one time or another. But it's
not clear to me that we have a handle on any better way to deal with the
filtering/reputation problem than to just transplant to existing social
structures of journals, editors, and peer review. Has anyone done any
serious thinking about this lately? I will understand if not, but I
would like to invite whoever is interested in this subject to engage me
in a discussion on it, either in person (you pick the time and place) or
by email (I will cc anyone who asks). My main interest in hypertext
publishing has always been the possibility of doing more than just
automating existing social institutions.
The following is a short summary of the problem, to stimulate discussion.
The usual picture imagines your personal software wandering distant
links, collecting items it thinks you might be interested in. But how
does it decide? Well, it could use simple correlations between common
local (i.e., easy to compute) properties of articles (like length, date,
keywords, associated link types) and what you have liked in the past.
But it's not clear this gets you far, especially since any simple
pattern widely recognized as a guide to quality could become useless as
writers found simple ways to form the pattern without real quality.
We usually come back to imagining that previous readers have given the
item some quality evaluations, which our software can use to filter out
the "garbage". This reduces our problem to evaluating these readers;
there will be fewer people than articles, so we have made progress, but
a huge problem remains.
Let's say an item has had 10-100 readers so far, and each of us has a
personal opinion on the "reputation" of 100-1000 people we have known.
Usually these two sets will not intersect; what do we do? I take this
to be the fundamental "reputation problem" of hypertext publishing.
Here is a list of possible solutions, and some concerns about them.
1) You could count everyone the same. Easy to implement, but could
easily result in National Enquirer sort of quality.
2) You could use simple correlations between people you have liked and
common local properties of them (like age, place of birth, subjects they
say they are interested in). Not clear this gets you far.
3) The set of articles that these two groups of people (the evaluators
of an article, and the people you know) have read may intersect.
Correlations in what they evaluated could tell you which of the
evaluators to trust more. But a straightforward implementation would
be too expensive to compute. And evaluations will have to be done
in some simple common format for correlations to be computable.
4) Seek the shortest path of "I like A likes B ... likes Z who read
article" and use their evaluation. Might be a bear to compute, and how
useful would it be ten links down?
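As a sketch of option 4, a breadth-first search finds the shortest chain of "likes" links ending at someone who has read the article. All names and data below are made up for illustration:

```python
from collections import deque

# Hypothetical toy data: who "likes" whom, and who has read the article.
likes = {
    "you": ["A", "B"],
    "A": ["C"],
    "B": ["D"],
    "C": ["E"],
    "D": ["E"],
}
readers = {"E"}  # people who have evaluated the article

def shortest_trust_path(start, likes, readers):
    """Breadth-first search for the shortest 'I like A likes B ...'
    chain ending at someone who has read the article."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        person = path[-1]
        if person in readers:
            return path
        for friend in likes.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None  # no chain of likes reaches any reader

print(shortest_trust_path("you", likes, readers))
```

Breadth-first order guarantees the first reader found lies at the end of a shortest chain, though the cost of the search grows with the fan-out of "likes" at each hop.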
5) You could avoid considering "random" articles at all. Start with an article
you like, see who else has liked it, and consider the other articles
they like. Well, this is pretty explosive, and you would really want to
see multiple paths like this leading to the same result to take it
seriously. But whether this would happen in practice really depends on
correlations in reading habits. And how easy is it to compute?
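A minimal sketch of option 5, assuming a toy table of who liked what; the min_paths threshold implements the "multiple paths leading to the same result" requirement:

```python
# Hypothetical toy data: article -> set of people who liked it.
liked_by = {
    "seed": {"p1", "p2", "p3"},
    "art1": {"p1", "p2"},
    "art2": {"p3"},
}

def recommend(seed, liked_by, min_paths=2):
    """Expand from one liked article via its co-likers, keeping only
    candidate articles reached through multiple independent readers."""
    co_likers = liked_by[seed]
    counts = {}
    for art, its_readers in liked_by.items():
        if art == seed:
            continue
        counts[art] = len(its_readers & co_likers)
    return [a for a, c in counts.items() if c >= min_paths]

print(recommend("seed", liked_by))
```

Here "art1" survives because two of the seed's co-likers liked it, while "art2" (one path) is dropped; in practice the expansion would explode combinatorially just as the text worries.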
6) People use large reputation groups which are few enough in number
that you are likely to know your opinion of some group which happened to
evaluate someone who has read the article in question. (Or the group
directly evaluates the article.) Groups compete to be liked enough to
get the economies of scale working here. Groups may be progress, but
the core problem remains: how does the group merge the opinions of all
the people who contribute to it in order to evaluate some person or article?
7) You subsidize a market so that computational agents will make bets
with each other about what your evaluation would be if you read an
article. The incentives seem right here, but the question remains of
how the agents can compute reasonable estimates.
8) The following mathematical approach has the virtue of being simple
enough to let us analyze its behavior. Each person j publishes an
evaluation magnitude A(i,j) of each other person i they know. For
people they don't know they declare a default, such as 0. Without loss
of generality, we can normalize evaluations so that sum(i,A(i,j)) = 1.
As a substitute for your own (missing) opinion on something, you might
use a linear weighting of other people's opinions, weighted by how
much you like them: sum(i,Opinion(i)*A(i,j)).
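As a tiny worked instance of that linear weighting (all numbers assumed for illustration), with your weights A(i, you) normalized to sum to 1:

```python
# Hypothetical toy data: others' ratings of an article you haven't read,
# and your published evaluations A(i, you) of those people.
opinions = {"A": 0.9, "B": 0.2, "C": 0.6}    # Opinion(i)
my_weights = {"A": 0.5, "B": 0.3, "C": 0.2}  # A(i, you); sums to 1

# The substitute opinion sum(i, Opinion(i) * A(i, you)):
proxy = sum(opinions[i] * my_weights[i] for i in opinions)
print(round(proxy, 3))  # 0.45 + 0.06 + 0.12 = 0.63
```

The normalization matters: because the weights sum to 1, the proxy stays inside the range of the opinions being averaged.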
If you don't have a considered weight for the people who have a
considered opinion, you might use a weighted average of what other
people think of them, weighted by what you think of those other people.
In matrix notation this is using A*A instead of A. By continuing this
"what others (that I like) think of others think of ... others think of
the article" trend, we might consider using A*A*A ... or A^infinity.
A^infinity has the virtue of definitely using considered opinions at
some level, and it happens to produce a consensus weight w(i) =
A^infinity(i,any-j) so we can all share in its computation. This same
consensus comes from the constraint that the consensus weight for
someone should be the weighted average of people's opinion of them,
weighted by those other people's consensus weight. In matrix notation
A*w = w, or w is an eigenvector of A.
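A minimal sketch with an assumed 3-person matrix: power iteration approximates applying A^infinity to a uniform start vector, and the result satisfies the fixed point w = A*w described above. Each column of A sums to 1, matching the normalization sum(i, A(i,j)) = 1:

```python
# Hypothetical toy data: A[i][j] is person j's published weight on
# person i; each column sums to 1 per the text's normalization.
A = [
    [0.0, 0.5, 0.3],
    [0.6, 0.0, 0.7],
    [0.4, 0.5, 0.0],
]

def consensus_weights(A, iterations=200):
    """Power iteration: repeatedly apply A to a uniform start vector.
    For a column-stochastic A this converges toward the eigenvector
    with eigenvalue 1, i.e. the consensus weight w with A*w = w."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return w

w = consensus_weights(A)
# Verify the fixed-point property w = A*w numerically.
Aw = [sum(A[i][j] * w[j] for j in range(len(w))) for i in range(len(w))]
```

Because A is column-stochastic, each iteration preserves the total weight, so w remains a probability distribution over people; convergence needs the evaluation graph to be connected enough (the same condition later exploited by PageRank-style algorithms).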
Unfortunately simulations I have run indicate that to maximize your
consensus weight, the best strategy is to give yourself a very high
weight. If that is forbidden, join a small mutual-admiration
society. This is reminiscent of current academic behavior, and is
something I worry about in any popularity-based reputation system.
The formulation of the problem given above assumes that deciding what a
reader would enjoy reading is the ground everything else is based on.
But in current academia the ground of funding, tenure, etc. is respect.
People write publications to get them evaluated highly by people with
clout, independent of how many people enjoy reading them, and read
mainly to help decide what to write, independent of what they personally
enjoy reading. This suggests that a hypertext reputation system might
have more effect on current academia than the articles and links themselves.
Robin Hanson hanson@xxxxxxxxxxxxxxxxxxx (or hanson@xxxxxxxxxxxxxxxxxxxx)
415-604-3361 MS244-17, NASA Ames Research Center, Moffett Field, CA 94035
415-651-7483 47164 Male Terrace, Fremont, CA 94539-7921
"Question Authority -- But Raise Your Hand First"