[tech, lj] Distributing LJ

Aug 07, 2007 00:44

Ever since the Strikethrough of '07 -- actually, ever since I realized that LJ was something of an attractive nuisance of a basket in which to store eggs, way back when -- I've been thinking about how one would go about turning LJ, the software, from a client/server model to a peer-to-peer model. That is, how to make LJ distributed ( Read more... )


en_ki August 7 2007, 15:10:08 UTC
I am a sysadmin in large part because I like hacking but don't have the patience to write huge projects. This (~) has been my "never going to get to it" project for years.

So, uh, somebody else should write it. I'll pitch in now and then.


dpolicar August 7 2007, 17:01:11 UTC
One consequence would, I think, be to skew the dist-LJ community towards the (people who can maintain an LJ-dist install and their friends) crowd. Which is, I guess, OK with me... although there are people I'd probably lose LJ contact with if that happened too sharply.

siderea August 7 2007, 18:45:14 UTC
Yep.

siderea August 7 2007, 20:38:51 UTC
Though to be clear, part of the (current events) problem this is trying to solve is that people fleeing LJ are simply moving to other gated communities, not because they want gates, but because nobody is offering an open alternative that also allows you to lock down what you want. This would be an alternative that provides interoperability. I strongly suspect that if the hypothetical we who developed an LJdist were to offer the interoperability code to the LJ code base, the GJ, DJ, and other knockoff folks would love it. LJ might even adopt it[*], thereby allowing LJ users to interoperate correctly with LJdist.

[* If hell freezes over. All of LJ's recent problems are almost definitionally the product of having been taken over by businessmen who fail to understand such decisions well enough to make them reasonably. If LJ were in the hands of people who would decide that was a good idea, we wouldn't be having this conversation, would we?]


merle_ August 7 2007, 18:50:31 UTC
One of the other problems you have is similar to the problems with torrents: even in "peer to peer" systems there need to be trackers, aggregators, whatever you call them: central servers that will tell you where all the peers are. Each peer can keep its own cache, but there still needs to be a DNS-style way of learning when someone's journal moves servers or IP addresses.

Name allocation would be grim, too, in a distributed system. Too easy for two disconnected peers to choose the same name for themselves. Although OpenID might help with that.
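
A minimal sketch, in Python, of how OpenID could help with both points: if a peer's name is simply its OpenID URL, two disconnected peers cannot collide, and locating a peer reduces to a purely local cache of where each identity last announced itself, rather than a central tracker. The PeerCache class and its method names are hypothetical, not part of any existing system.

    # Minimal sketch (Python): OpenID URLs as globally unique names, so two
    # disconnected peers can't pick the same name, plus a purely local
    # "last known location" cache instead of a central tracker.
    # PeerCache, update, and locate are hypothetical names.

    from typing import Dict, Optional
    from urllib.parse import urlparse


    class PeerCache:
        """Maps a peer's OpenID URL (its name) to its last known endpoint."""

        def __init__(self) -> None:
            self._endpoints: Dict[str, str] = {}  # openid_url -> "host:port"

        def update(self, openid_url: str, endpoint: str) -> None:
            # A peer announces its own current location; no allocation
            # authority is needed beyond whoever already controls the URL.
            if not urlparse(openid_url).netloc:
                raise ValueError("an OpenID identifier should be a full URL")
            self._endpoints[openid_url] = endpoint

        def locate(self, openid_url: str) -> Optional[str]:
            """Return the cached endpoint, or None if we've never heard from them."""
            return self._endpoints.get(openid_url)


    cache = PeerCache()
    cache.update("https://example.com/~siderea", "journal.example.com:8080")
    print(cache.locate("https://example.com/~siderea"))

The remaining central dependency is whatever DNS resolution the OpenID URL itself requires -- which is exactly the issue raised further down the thread.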

siderea August 7 2007, 20:23:37 UTC
One of the other problems you have is similar to the problems with torrents: even in "peer to peer" systems there need to be trackers, aggregators, whatever you call them: central servers that will tell you where all the peers are.

No, you don't. You don't need a central directory of email addresses, do you?

You don't need to be told where other people are. If they want you to know, they'll tell you.

But if someone wanted such a central directory, they're welcome to go build one.

merle_ August 7 2007, 23:35:00 UTC
Email only works because of DNS, which is a form of tracker system. There are thousands (millions?) of DNS servers all over the world. For every domain, one is primary, one is secondary (backup), and all others simply maintain caches of domain-to-IP translations that have been requested through them. And there are thirteen (last I checked) "root" DNS servers that are a tertiary sort of backup. It really is centralized. The whole process of registering a domain is telling the root servers where the primary/secondary information sources are. Distributed authorities and caches, but central "hey, I haven't looked for this domain in ages, where is it?" forwarders.

It does look transparent, as if it is peer-to-peer. And most DNS lookups seem to be. But there are still central repositories.
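
As a toy illustration of that structure, here is a sketch in Python: most lookups are answered from a local cache, and only a miss falls back to a single central registry standing in for the root servers. Delegation, TTLs, and the primary/secondary distinction are deliberately elided; the names and addresses are made up.

    # Toy model (Python): lookups are usually answered from a local cache
    # (the distributed part), but a miss falls back to a central registry
    # standing in for the root servers (the centralized part).

    from typing import Dict, Optional

    ROOT_REGISTRY: Dict[str, str] = {
        "siderea.example": "203.0.113.7",  # name -> current address
    }


    class CachingResolver:
        def __init__(self) -> None:
            self.cache: Dict[str, str] = {}

        def resolve(self, name: str) -> Optional[str]:
            if name in self.cache:                # answered locally
                return self.cache[name]
            address = ROOT_REGISTRY.get(name)     # falls back to the center
            if address is not None:
                self.cache[name] = address
            return address


    r = CachingResolver()
    print(r.resolve("siderea.example"))  # goes to the central registry, then caches
    print(r.resolve("siderea.example"))  # answered from the local cache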

siderea August 7 2007, 23:39:38 UTC
Ah, yes, this is true. I wasn't planning on solving the DNS-is-Centralized problem, too. Seriously: do you see a plausible threat via DNS which makes avoiding it a priority?


eichin August 8 2007, 05:56:15 UTC
I think the main reason LJ *has* a network effect (people come here because their friends already are) is the identity part, being able to get comments from "real people" instead of "the scum of the net". Being cheap-and-easy helps too - but I wouldn't be an LJ user at all if a friend of mine hadn't gone "friends-only" before the OpenID support existed. I've already got a half dozen other blogs, most of which had more features to start with. Once I was here, actually posting was as much laziness as anything ( ... )

siderea August 8 2007, 06:50:41 UTC
If that's not a unique point of view, it suggests that you can separate out "port my journal content elsewhere" from "identify myself to LJ to comment here", and from "having LJ people identify themselves to my site."

The difference between a blog and a social networking app is that the social networking app has a rich, pervasive, user-configurable ACL system.
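
Concretely, such an ACL system might look something like this Python sketch, in which the journal owner defines named filters (sets of reader identities, e.g. OpenID URLs) and each post declares which filters may read it. The class and field names are illustrative, not LJ's actual schema.

    # Sketch (Python) of an LJ-style ACL model: the owner defines named
    # filters (sets of reader identities), and each post lists the filters
    # allowed to read it. Illustrative only.

    from dataclasses import dataclass, field
    from typing import Dict, List, Set


    @dataclass
    class Journal:
        owner: str
        # filter name -> set of reader identities (e.g. OpenID URLs)
        filters: Dict[str, Set[str]] = field(default_factory=dict)

        def can_read(self, reader: str, allowed_filters: List[str]) -> bool:
            if not allowed_filters:      # no filters named == public post
                return True
            if reader == self.owner:     # owners always see their own posts
                return True
            return any(reader in self.filters.get(f, set()) for f in allowed_filters)


    j = Journal(owner="https://example.com/~siderea")
    j.filters["close-friends"] = {"https://example.com/~dpolicar"}
    print(j.can_read("https://example.com/~dpolicar", ["close-friends"]))  # True
    print(j.can_read("https://example.com/~stranger", ["close-friends"]))  # False
    print(j.can_read("https://example.com/~stranger", []))                 # True (public)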

ACLs metageek August 8 2007, 13:30:42 UTC
But there's no reason you couldn't have an ACL system on a blog. Most blog software doesn't do it, because people don't think of blogs as social networking; but, given OpenID, it'd be pretty easy to add it.

Re: ACLs siderea August 8 2007, 16:59:14 UTC
Right.

Why does your comment start with "but"?


sethg_prime August 8 2007, 19:29:01 UTC
It seems to me that the trick to setting up LJdist would be:

(1) Extend some existing blog software (LJ, WordPress, whatever) so that the author of a blog posting can declare that only certain users--where OpenID is used to determine a user's identity--can read the posting, and so that the filtering of what postings a user sees happens not just for the regular Web page, but for the RSS/Atom feed as well. (I.e., if you point your browser at sidereasjournal.com/atom.xml, you get redirected to an OpenID login page, and you can only see the actual XML after you log in; see the sketch after this comment.)

(2) Extend some existing blog-aggregator software (the kind that runs on the desktop, rather than Google Reader or Bloglines, for obvious reasons--Sage, Liferea, whatever) so that you can log into it with OpenID and then it will pass along your OpenID credentials whenever it crawls your list of feeds.

(3) (Bonus!) Do the same thing for some servers and clients that support the Atom Publishing Protocol.

Framed in these terms, I think the project is do-able, although I, like ( ... )
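
A rough sketch of step (1), in Python, under the assumption that OpenID verification happens elsewhere and hands the feed code a verified identity (stubbed out here as verified_identity); unauthenticated requests would be bounced to the login page, and authenticated readers see only the entries their identity is allowed to read. All function and field names here are hypothetical.

    # Sketch (Python) of step (1): a per-reader-filtered Atom feed. OpenID
    # verification is stubbed out; a real implementation would establish the
    # reader's identity via an OpenID login and a session, and would answer
    # unauthenticated requests with a redirect to that login.

    from typing import Dict, List, Optional
    from xml.sax.saxutils import escape


    def verified_identity(request_headers: Dict[str, str]) -> Optional[str]:
        # Placeholder: map a session back to a verified OpenID URL.
        return request_headers.get("X-Demo-Verified-OpenID")


    def render_feed(entries: List[dict], reader: Optional[str]) -> str:
        if reader is None:
            # A real server would send a 302 to the OpenID login page here.
            return "redirect: /openid/login"
        visible = [e for e in entries if not e["acl"] or reader in e["acl"]]
        items = "".join(
            f"<entry><title>{escape(e['title'])}</title></entry>" for e in visible
        )
        return f'<feed xmlns="http://www.w3.org/2005/Atom">{items}</feed>'


    posts = [
        {"title": "public post", "acl": set()},
        {"title": "friends-only post", "acl": {"https://example.com/~dpolicar"}},
    ]
    print(render_feed(posts, verified_identity({})))            # redirect
    print(render_feed(posts, "https://example.com/~dpolicar"))  # both entries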

siderea August 8 2007, 20:07:01 UTC
Why wouldn't you use the built-in LJ aggregator, if you're already using LJ's code?

To make a desktop aggregator work like LJ's, you would need to extend it to handle cuts, and it would have to share ACLs with your site anyway. (Filters == ACLs on LJ, after all. Now, it could be seen as a feature to break away from LJ's model on that, but if the point of the exercise is to give people independent LJs....)
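
For the aggregator half (step 2 above), a Python sketch using only the standard library: the aggregator holds on to whatever session an earlier OpenID login established (here, a cookie jar) and replays it each time it polls a protected feed, so the server can apply the reader's ACLs. The feed URL is hypothetical, and the login step itself is not shown.

    # Client-side sketch (Python): keep the session from a (not shown) OpenID
    # login in a cookie jar and send it along on every poll of a protected
    # feed, so the server can filter the feed per reader.

    import http.cookiejar
    import urllib.error
    import urllib.request

    cookies = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookies))

    FEEDS = [
        "https://sidereasjournal.example/atom.xml",  # hypothetical protected feed
    ]


    def poll(feed_url: str) -> bytes:
        # Cookies obtained during the earlier login ride along here.
        with opener.open(feed_url) as response:
            return response.read()


    for url in FEEDS:
        try:
            print(url, len(poll(url)), "bytes")
        except urllib.error.URLError as err:
            print(url, "fetch failed (not logged in, or unreachable):", err)

Handling LJ-style cuts would be a separate extension layered on top of this.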

sethg_prime August 8 2007, 20:42:20 UTC
I might, but not necessarily.

I've never looked at the LJ code base so I don't know how hard it would be to change from a username-based model to an OpenID-based model for ACLs.

As a proof of concept, it would probably be easier to add OpenID-based filtering to blosxom than LJ, because blosxom is built around a simple Perl CGI script.

siderea August 8 2007, 20:54:56 UTC
As a proof of concept, it would probably be easier to add OpenID-based filtering to blosxom than LJ, because blosxom is built around a simple Perl CGI script.

You really think so? But then you have to write all the infrastructure for managing those ACLs, which LJ already has. Likewise, I haven't seen the code, and maybe it is all complete spaghetti....
