Jun 26, 2012 10:35
Suppose that a device (software or hardware) can be remotely loaded with updates. Many classes of devices can be updated like this: phones, browsers, video cards, operating systems, antivirus systems, even cars and embedded medical devices. If I recall correctly, the game 'World of Warcraft' supports an ecosystem of plugins in its user interface, such that a single plugin within that system can be remotely updated, just like WoW itself, the surrounding Windows operating system, the surrounding virtual machine software, and the underlying Linux operating system. Similarly, WordPress is analogous to WoW in that it has themes which can be versioned and have parent and child relationships - a child theme depends on a parent theme, which depends on WordPress, which depends on Apache, PHP, and MySQL, which depend on Linux, which depends on some virtualization software, which depends on some underlying operating system.
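To make the nesting concrete, here is a minimal sketch (in Python) of that chain as a dependency graph. The package names and the shape of the graph mirror the WordPress example above and are purely illustrative, not any real package manager's data.

    # Illustrative only: a dependency chain like the WordPress example,
    # expressed as a mapping from each package to the packages it depends on.
    DEPENDS_ON = {
        "child-theme": ["parent-theme"],
        "parent-theme": ["wordpress"],
        "wordpress": ["apache", "php", "mysql"],
        "apache": ["linux"],
        "php": ["linux"],
        "mysql": ["linux"],
        "linux": ["virtualization"],
        "virtualization": ["host-os"],
        "host-os": [],
    }

    def transitive_deps(package, graph=DEPENDS_ON):
        """Everything that must be present (and potentially updated) beneath a package."""
        seen = set()
        stack = [package]
        while stack:
            current = stack.pop()
            for dep in graph.get(current, []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    # transitive_deps("child-theme") reaches everything down to "host-os".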
Each package management system (and there may be several in these nested matryoshka systems) has something like a database (though it may be expressed as a flat file, or as an ad-hoc machine built out of files, directories, and config files in various formats, including executable config files and symbolic links), which represents "the entity". It's tempting to argue that all of these package management systems ought to be unified and systematized (I love systems).
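As an illustration of how small such a "database" can be, here is a hedged sketch: a flat file of name-and-version lines read into a dict. The file name and format are assumptions for the example, not any particular package manager's on-disk layout.

    # Illustrative only: the "database" as a flat file of "name version" lines,
    # e.g. installed.txt containing:
    #   wordpress 3.4
    #   parent-theme 1.2
    #   child-theme 0.3
    def read_installed(path="installed.txt"):
        installed = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                name, version = line.split()
                installed[name] = version
        return installed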
However, a better strategy might be to say that multiple perspectival maps are fine. A perspectival map is something like the 1976 New Yorker cover 'View of the World from 9th Avenue', which shows the Pacific Ocean as roughly the same size as 10th Avenue. Another perspectival map is a routing table indicating which blocks of IP addresses can be found beyond which physical interfaces, or a sign in an airport with several arrows indicating which way to go for gates C1, C2, and C3, and one similarly-sized arrow indicating which way to go for concourses A and B.
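Here is a minimal sketch of how a routing table plays that role: nearby destinations get fine-grained entries, and everything else collapses into one coarse default. The prefixes and interface names are invented for the example.

    import ipaddress

    # Illustrative only: a tiny routing table as a perspectival map.
    # Local networks get detailed entries; the rest of the world is one line.
    ROUTES = [
        (ipaddress.ip_network("10.0.1.0/24"), "eth0"),   # this building
        (ipaddress.ip_network("10.0.0.0/16"), "eth1"),   # the campus
        (ipaddress.ip_network("0.0.0.0/0"), "ppp0"),     # everywhere else
    ]

    def route(destination):
        """Longest-prefix match: the most specific route containing the address wins."""
        addr = ipaddress.ip_address(destination)
        matches = [(net, iface) for net, iface in ROUTES if addr in net]
        return max(matches, key=lambda pair: pair[0].prefixlen)[1]

    # route("10.0.1.7") -> "eth0"; route("8.8.8.8") -> "ppp0"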
A software reliability growth model is something like a system for roughly estimating how likely you are to find a bug in any given interval of use of the system, based on how much the system has been exercised and how many of the bugs found so far have been fixed. Note: the software reliability growth models that I've seen abstract ferociously away from the details of the system. They might represent the size of a system as a number of lines of code, and the number of bugs in the system as a simple number. But they are models, after all.
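For concreteness, here is a hedged sketch using one classic model of this kind, the Goel-Okumoto NHPP model (the post doesn't name a specific model, so this is just an illustration, and the parameter values are invented).

    import math

    # Illustrative only: the Goel-Okumoto model, one classic software
    # reliability growth model. Expected cumulative failures by time t:
    #   m(t) = a * (1 - exp(-b * t))
    # where a = expected total number of faults, b = per-fault detection rate.
    def expected_failures(t, a, b):
        return a * (1.0 - math.exp(-b * t))

    def prob_failure_in_interval(t, delta, a, b):
        """Probability of at least one failure in (t, t + delta],
        under the model's non-homogeneous Poisson process assumption."""
        expected_in_interval = expected_failures(t + delta, a, b) - expected_failures(t, a, b)
        return 1.0 - math.exp(-expected_in_interval)

    # Invented parameters: ~50 total faults, detection rate 0.01 per hour of use.
    # prob_failure_in_interval(t=100, delta=8, a=50, b=0.01) estimates the chance
    # of a bug showing up in the next 8 hours after 100 hours of exercise.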
An entity with different modes of operation (e.g. development mode, production mode) might have safeties in place so that it refuses to run at all, or falls back to a more limited set of modules, based on the reliability associated with each package and tracked in the package management system. Some packages might be exercised in production mode and gradually gain reliability even in the field - e.g. "I've run this module, in a sandbox but fed with real data, for a while now and it hasn't crashed the sandbox - do you want to try it without the sandbox?"
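A hedged sketch of what such a safety might look like: package metadata carries a reliability estimate and a record of sandbox time, and a policy function decides between running normally, sandboxing, offering a promotion out of the sandbox, or refusing. The thresholds and field names are entirely made up for the example.

    # Illustrative only: a safety policy keyed off reliability tracked per package.
    def decide(package, mode):
        """Return 'run', 'sandbox', 'ask-to-promote', or 'refuse' for a package in a given mode."""
        reliability = package.get("reliability", 0.0)   # e.g. estimated P(no crash per hour)
        sandbox_hours = package.get("sandbox_hours", 0)
        if mode == "development":
            return "run"                  # anything goes in development mode
        if reliability >= 0.999:
            return "run"                  # trusted enough for production
        if reliability < 0.5:
            return "refuse"               # too flaky to run at all
        if sandbox_hours >= 100:
            return "ask-to-promote"       # enough sandbox time: offer to try it unsandboxed
        return "sandbox"                  # keep feeding it real data behind the walls

    # decide({"name": "new-theme", "reliability": 0.95, "sandbox_hours": 120}, "production")
    # -> "ask-to-promote": it hasn't crashed the sandbox, so ask whether to let it out.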
Capability security might have something to say about this - and about a related point, which is that usually a device is running a submodule (an app or something) on behalf of someone, and that person's preferences and authorization matter.
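A hedged sketch of the capability flavor of this idea (the names are invented, and this is not any particular capability system's API): instead of ambient authority, a submodule receives only the specific capabilities that the person it runs on behalf of has authorized.

    # Illustrative only: a submodule gets explicit capability objects rather than
    # ambient authority, and each capability records whose authorization it carries.
    class FileReadCapability:
        """Grants read access to one path, on behalf of one person."""
        def __init__(self, path, granted_by):
            self._path = path
            self.granted_by = granted_by

        def read(self):
            with open(self._path) as f:
                return f.read()

    def run_plugin(plugin, capabilities):
        """The plugin can only touch what it was handed; nothing else is reachable."""
        return plugin(capabilities)

    # The device owner authorizes read access to one file; the plugin sees only that.
    # cap = FileReadCapability("/home/alice/notes.txt", granted_by="alice")
    # run_plugin(lambda caps: caps[0].read(), [cap])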