
Re: Top 10 Trauma



Part of the real disagreement about how to do things is that the first product
has to be the exemplar; it has to show the basics of what we know about
how to use the Xanadu technology to solve diverse problems.  Remember
that the code for this will be the standard that most developers will
look at or start from in developing a Xanadu fe.


	1) The relationship between filtering by link type and endorsement 
	isn't fully satisfactory (or at least it wasn't at the time of 
	the review; dean may have already fixed it). You don't really 
	filter by link type; you filter by endorsement. Since the type 
	and the endorsement are not the same thing, you can get errors 
	and/or intentional abuses. The distinction between the two also 
	raises some difficult issues in the frontend, such as, does the 
	"From" field in the header bar show the endorsement or the edit 
	club? 
I still don't understand how markm & Dean expect this stuff to work without
findlinksfromtoothree.
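
Here is a little sketch in Python of why the two aren't interchangeable; the Link
class, club names, and filter functions are made up for illustration, not the real
FeBe interface:

    from dataclasses import dataclass, field

    @dataclass
    class Link:
        link_type: str                                   # what the link claims to be
        endorsements: set = field(default_factory=set)   # clubs that endorsed it

    def filter_by_endorsement(links, club):
        # What the current design actually does: keep links endorsed by a club.
        return [l for l in links if club in l.endorsements]

    def filter_by_type(links, wanted_type):
        # What the user probably thinks is happening: keep links of a given type.
        return [l for l in links if l.link_type == wanted_type]

    links = [
        Link("comment", {"comment-club"}),
        Link("quote",   {"comment-club"}),   # abuse: a quote endorsed as a comment
        Link("comment", set()),              # error: a comment never endorsed
    ]

    # Endorsement filtering lets the mislabeled quote through and drops the
    # unendorsed comment; type filtering would not.
    print(filter_by_endorsement(links, "comment-club"))
    print(filter_by_type(links, "comment"))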

	2) We need to figure out what it means when people do strange 
	noncontiguous selections: what if the guy selects one headline 
	in the flat view of an inclusion list, then selects a few characters 
	from another headline, plus a few characters from one of the 
	references, plus a whole document reference? What if he pastes 
	this mess into the middle of another headline? Yech! I love Ravi's 
	proposal to not allow it; but for reasons which I won't go into 
	here, that is not quite viable.
Noncontiguous selections and multi-ended endsets are the same thing.
I know how this stuff should work for real documents.  I don't understand 
outlines so well, though; it might be more interesting there.
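
A sketch of what I mean, with made-up names rather than the real frontend
structures: a noncontiguous selection is just an endset whose ends are
(document, start, length) spans, and pasting splices every end in at one point.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Span:
        doc: str     # which document the characters come from
        start: int   # offset of the first character
        length: int  # number of characters

    # One headline, a fragment of another headline, a fragment of a
    # reference, and a whole document reference: an endset with four ends.
    selection = [
        Span("inclusion-list", 0, 40),
        Span("inclusion-list", 95, 6),
        Span("reference-17", 10, 8),
        Span("budget-report", 0, 1200),
    ]

    def paste_into(target: Span, offset: int, endset):
        # Pasting into the middle of a span splits it in two and splices
        # every end of the selection in between, preserving their order.
        before = Span(target.doc, target.start, offset)
        after = Span(target.doc, target.start + offset, target.length - offset)
        return [before, *endset, after]

    print(paste_into(Span("other-headline", 0, 30), 12, selection))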

	5) Converting the raw information from a Xanadu version comparison 
	into a markup diagram is a problem of unplumbed depths. Every 
	time I think about it, I find more terrible wrinkles. It gets 
	worse when I think about using Xanadu versioning for nontextual 
	objects. I think we may have to build some very sophisticated 
	tools on top of Docs & Links to make it possible for mere mortal 
	programmers to use it without a year of research.
This is the reason I'm pressuring Ted to get us some examples on ParallelTextface.
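
To give the flavor of it, here is a sketch that assumes (my assumption, not the
real comparison interface) that the comparison comes back as a list of spans shared
between the two versions; even that simple form takes some care to turn into
kept/inserted/deleted markup.

    def markup(old_len, new_len, shared):
        # shared: list of (old_offset, new_offset, length) runs common to
        # both versions, in order.  Emit a crude markup diagram from them.
        out, old_pos, new_pos = [], 0, 0
        for old_off, new_off, length in shared:
            if old_off > old_pos:
                out.append(("deleted", old_pos, old_off - old_pos))
            if new_off > new_pos:
                out.append(("inserted", new_pos, new_off - new_pos))
            out.append(("kept", new_off, length))
            old_pos, new_pos = old_off + length, new_off + length
        if old_pos < old_len:
            out.append(("deleted", old_pos, old_len - old_pos))
        if new_pos < new_len:
            out.append(("inserted", new_pos, new_len - new_pos))
        return out

    # Old version is 100 characters, new version is 120, two shared runs.
    print(markup(100, 120, [(0, 0, 40), (60, 70, 40)]))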


	6) We need to figure out a minimal set of formats.


	7) In using plain old normal text, some people here have a fierce 
	desire to store style information "right", so that people won't 
	get hiccups as they shift from pcs to macs to suns. There seems 
	to be a Xanadu contingent that wants to do this the "right" way 
	even if the "right" way is incompatible with the standard technique 
	used on Macs (and on PCs with Windows, last I saw) for picking 
	a character style off the menu. All kinds of truly intense battles 
	could arise on this terrain. Hugh tells me that significant progress 
	has been made on this since the capabilities review; I sure hope 
	so.
I'm sure we can solve this so that it LOOKS the same as the standard Mac & PC
approach, even though it isn't really.
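
A sketch of what I have in mind (the style names and font tables are illustrative,
not an agreed-on format): the document stores abstract style names, each platform
maps them to its own concrete fonts at display time, and the menu the user picks
from looks like the usual character-style menu.

    ABSTRACT_STYLES = ["plain", "emphasis", "strong", "heading"]

    PLATFORM_FONTS = {
        "mac":     {"plain": "Geneva 12", "emphasis": "Geneva 12 Italic",
                    "strong": "Geneva 12 Bold", "heading": "Times 18 Bold"},
        "windows": {"plain": "Arial 10", "emphasis": "Arial 10 Italic",
                    "strong": "Arial 10 Bold", "heading": "Times New Roman 14 Bold"},
    }

    def style_menu():
        # The menu looks like the familiar character-style menu, but each
        # entry names an abstract style, not a platform font.
        return ABSTRACT_STYLES

    def render(style_name, platform):
        # Only at render time does the abstract style become a concrete font.
        return PLATFORM_FONTS[platform][style_name]

    print(style_menu())
    print(render("strong", "mac"))       # Geneva 12 Bold
    print(render("strong", "windows"))   # Arial 10 Bold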

	9) There are a number of issues associated with multiple editors 
	for a document (such as, when do you grab the bert? or worse, 
	when do you release the bert?), all of which go away if we make 
	it one editor per document for Release 1 of this particular frontend 
	that was designed for small workgroups.
Sorry, they are necessary to solve; fortunately, they will be trivial to code.
This is more of an illusion than a real problem.  You grab the bert when you start
modifying the document.  All the editors are on the same document, unless you 
explicitly make a version.  When you are down to one editor, the problem is 
reduced to the previously unsolved problem.
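
Here is roughly how trivial I expect the code to be: a sketch with made-up class
names, treating the bert as a per-document write permission that is grabbed on the
first modification and released when the editor is done.

    class Document:
        def __init__(self, name):
            self.name = name
            self.bert_holder = None   # editor currently allowed to write
            self.contents = []

    class Editor:
        def __init__(self, user, doc):
            self.user, self.doc = user, doc

        def modify(self, text):
            doc = self.doc
            # Grab the bert lazily, on the first attempt to modify.
            if doc.bert_holder is None:
                doc.bert_holder = self.user
            if doc.bert_holder != self.user:
                raise RuntimeError(f"{doc.name}: bert held by {doc.bert_holder}")
            doc.contents.append(text)

        def done(self):
            # Release the bert when this editor finishes its changes.
            if self.doc.bert_holder == self.user:
                self.doc.bert_holder = None

    doc = Document("meeting-notes")
    alice, bob = Editor("alice", doc), Editor("bob", doc)
    alice.modify("agenda item 1")       # alice grabs the bert
    try:
        bob.modify("agenda item 2")     # bob is refused until alice is done
    except RuntimeError as complaint:
        print(complaint)
    alice.done()
    bob.modify("agenda item 2")         # now bob can grab it
    print(doc.contents, doc.bert_holder)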

	10) Deleting a document is an act of terrible ramifications in 
	a hypermedia system. My favorite subproblem in this lump is, 
	suppose you delete a document which is the only bridge through 
	which connections flow from one part of the information pool 
	to the other? Now you have two disconnected pools; thou shalt 
	never find everything ever again, unless thou ist very lucky. 
	Anyway, I can now see why implementors of hypermedia systems 
	would prefer, for technical reasons, to bar the user from deleting 
	anything, ever. Fortunately, we are wise enough to implement 
	this system for users, not for implementers :-)

This is one reason why we used to have a global numbering scheme, which mapped
into a global index.  The interface in the 88.1 fe was such a kluge that it wasn't
documented or talked about.  With a global index everything is reachable.
Can we have some global index for all documents that have user-defined names,
for example, or something like that (just a mild suggestion, not a religious 
conviction)?  I've been wondering what markm had in mind to replace this
from the 88.1 stuff.  In 88.1, of course, you could just ask for all the documents
by giving a retrieve request finddocscontaining on the entire docuverse,
then construct an index from that.  You wouldn't want to have to do this, at least
not often.  What's the current equivalent?
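
The kind of thing I'm picturing, as a sketch only (the registry and its names are
mine, not 88.1's or the current backend's): a global index of user-named documents
kept outside the link structure, so deleting a bridge document never strands anything.

    class GlobalIndex:
        def __init__(self):
            self._by_name = {}   # user-chosen name -> document id

        def register(self, name, doc_id):
            # Called whenever a user gives a document a name.
            self._by_name[name] = doc_id

        def lookup(self, name):
            return self._by_name.get(name)

        def all_documents(self):
            # Everything named is reachable from here, regardless of which
            # linking documents have been deleted in the meantime.
            return sorted(self._by_name.items())

    index = GlobalIndex()
    index.register("budget-report", "doc-17")
    index.register("meeting-notes", "doc-42")
    # Even after the only document linking these two is deleted,
    # both are still findable by name:
    print(index.all_documents())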

	11) Multi-bert endsets: what do they mean, what do you do with 
	them, and how do you survive them in this particular frontend 
	designed for small work groups?
	 
They mean the collection of stuff in the endset.  When you follow a link
with a multi-ended endset you are faced with a choice of ends, or you get them
all (user preference).
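
A sketch of that choice, with illustrative names only:

    def follow_link(target_ends, preference, choose=None):
        # target_ends: the ends of the link's target endset.
        # preference: "all" opens every end; "ask" lets the user pick one.
        if preference == "all":
            return target_ends
        if preference == "ask":
            return [choose(target_ends)]
        raise ValueError(f"unknown preference: {preference}")

    ends = ["doc-17 @ 120..180", "doc-42 @ 0..35", "doc-99 @ 10..12"]

    print(follow_link(ends, "all"))
    # With "ask", some chooser (here: just take the first end) picks:
    print(follow_link(ends, "ask", choose=lambda es: es[0]))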