Collaborative real-time editing of various types of media...

Apr 25, 2007 21:52

[note: this post was the first thing I ever wrote in ACE, an open-source, cross-platform real-time collaborative editor available from http://ace.sourceforge.net/.]

So perhaps the first use of a collaborative editor should be a discussion of collaborative editing itself -- more specifically, of what needs to happen for various other forms of media to be edited collaboratively.

1) Text

Text is arguably the easiest. The document still carries metadata about who made which change, but the changes themselves are compact, and the final product is just as compact.
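As a minimal sketch of what such a compact change record might look like (the field names here are my own invention, not ACE's actual format):

```python
# Hypothetical edit-operation records for collaborative text editing.
# Each change is small -- a position, the text involved, and who made it --
# and the full document can be rebuilt by replaying the log.

def apply_op(doc, op):
    """Apply one insert or delete operation to a document string."""
    if op["type"] == "insert":
        return doc[:op["pos"]] + op["text"] + doc[op["pos"]:]
    elif op["type"] == "delete":
        return doc[:op["pos"]] + doc[op["pos"] + len(op["text"]):]
    raise ValueError("unknown op type")

ops = [
    {"type": "insert", "pos": 0, "text": "Hello world", "author": "alice"},
    {"type": "delete", "pos": 5, "text": " world", "author": "bob"},
    {"type": "insert", "pos": 5, "text": ", everyone", "author": "bob"},
]

doc = ""
for op in ops:
    doc = apply_op(doc, op)
# doc is now "Hello, everyone", and the op log preserves who did what.
```

The op log is both the edit history and the attribution metadata, which is why text collaboration stays so cheap.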

2) Graphics

Historically, graphics have been viewed as "2D cursor location, operation, parameters" sets. This is what allowed OpenCanvas to work so well -- it essentially took a canvas from ground zero to the finished piece using nothing but a bounded set of operations. One of the things that /didn't/ work was that you couldn't load a graphic into the software while in collaborative mode.

This leads to an observation: "in some data/operation sets, data reduction is necessary."
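The op-set view of graphics can be sketched like this (the tuple layout and field names are illustrative, not OpenCanvas's real format):

```python
# Sketch of graphics as "(cursor location, operation, parameters)" sets.
# The whole drawing is nothing but a replayable log of bounded operations.

def replay(canvas_ops):
    """Rebuild a canvas description by replaying the operation log."""
    strokes = []
    for (x, y), op, params in canvas_ops:
        if op == "stroke":
            strokes.append({"at": (x, y), **params})
    return strokes

session = [
    ((10, 20), "stroke", {"color": "#000", "width": 2}),
    ((15, 25), "stroke", {"color": "#f00", "width": 4}),
]
canvas = replay(session)
# Replaying the log takes the canvas from ground zero to the result --
# but there is no operation for "start from this imported bitmap",
# which is exactly the limitation noted above.
```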

2A) 3D Modelling

In some frameworks, 3D modelling is based on an extension of graphics -- instead of a 2D cursor location, it becomes a 3D cursor location. This realistically changes nothing except slightly increasing the amount of coordinate data in use.

3) Sound

There are two types of sound involved here. One is the pure waveform (the one that is truly 'sound' as we know the term) and the other is the 'performance' (the one that is implemented as MIDI).

The second type has such a compact representation that one could arguably edit it "like" text, though adherence to the structure would be necessary.
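A MIDI-like performance really is just a short list of events, so "editing it like text" is plausible as long as edits preserve the structure. A sketch (the event format here is illustrative, not any real MIDI library's API):

```python
# A "performance" as a compact event list: each event is a handful of
# integers, cheap to transmit and easy to diff and merge like text.

performance = [
    {"tick": 0,   "note": 60, "velocity": 90, "duration": 480},  # middle C
    {"tick": 480, "note": 64, "velocity": 90, "duration": 480},  # E
]

def transpose(events, semitones):
    """A structure-preserving edit: shift every note's pitch."""
    return [{**e, "note": e["note"] + semitones} for e in events]

up_a_third = transpose(performance, 4)
# The edit touches one field per event and leaves the structure intact,
# which is the kind of adherence the text above calls for.
```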

The first, though, has a huge amount of data associated with it. Even MP3, which is a "lossy" compression algorithm, reduces the amount of data involved; it doesn't, however, reduce it enough to be truly useful in a collaborative context. What is worse, waveforms with a lot of data removed are simply not appropriate for "finished works".

This implies: "Data reduction is necessary, but the data reduced must not be lost, and must not have its synchronization features removed or changed."
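One way such a reduction could work -- a hypothetical sketch, not any shipping audio tool's approach -- is to send a decimated preview while keeping an exact mapping back to the untouched full-resolution data:

```python
# "Temporary" reduction: a decimated preview of a waveform is sent for
# fast collaboration, while the decimation factor preserves an exact
# mapping back to the original samples, so nothing is lost or desynced.

def make_preview(samples, factor):
    """Keep every Nth sample, remembering the factor for resynchronization."""
    return {"factor": factor, "peaks": samples[::factor]}

def preview_index_to_sample(preview, i):
    """Map a position in the preview back to the original sample index."""
    return i * preview["factor"]

full = list(range(44100))          # one second of fake 44.1 kHz samples
preview = make_preview(full, 441)  # only 100 points go over the wire
# A collaborator pointing at preview index 50 means original sample
# 22050 -- exactly 0.5 s -- so edits land on the full-quality data.
```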

I believe that the next core of the "real-time collaborative" movement will be in the appropriate, temporary reduction of data to give enough context for rapid collaboration.

Another way to view collaboration is this:

In order for collaboration to work, there must be a low barrier to entry and a low barrier to participation. There must also be a way to inform the clients of the state of the collaboration, and a way to inform the clients of any changes to that state.

This means:

1) Enough state must be transferred at the beginning of the session to make collaboration meaningful.
2) This state transfer must NOT take forever.
3) Enough state must be transferred at every state change to make the collaborative environments aware of what has occurred.
4) This state transfer must NOT take forever.

In some ways, the forebears of the Web had to deal with this as well. Interlaced GIFs, progressive JPGs, progressive PNGs -- they were all means of reducing the amount of data sent in the first and subsequent passes, from the graphics viewpoint.