an OS question

Nov 14, 2010 14:04

While waiting for assorted software updates to install today I found myself wondering... Mac OS and Windows usually need to reboot your machine to install updates. Yet I have, several times, seen Unix machines that I believe were being maintained with uptimes of more than a year. What's the deal? Is Unix just better able to support hot-fixes, or ( Read more... )

computers: mac

Leave a comment

Comments 10

richardf8 November 14 2010, 19:37:10 UTC
I think a lot of it has to do with upkeep of the user interface. Unix machines with long uptimes tend to be servers, where the GUI, if it is running at all, is rarely interacted with. Stopping a service, updating its executable, and restarting it is trivial. However, even on Unix boxes, once GUI packages are involved, things get messy.

The Pretty - it costs.
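That stop/update/restart cycle can be sketched in a few lines. This is a toy Python illustration, not a real service manager: the "daemon" is just a throwaway script, and the file name is invented.

```python
import os
import subprocess
import sys
import tempfile

# A stand-in "service": a tiny script we stop, replace on disk, and restart.
# (Hypothetical setup; a real daemon would be managed by init or similar.)
svc = os.path.join(tempfile.mkdtemp(), "daemon.py")

with open(svc, "w") as f:
    f.write('print("version 1")\n')
run1 = subprocess.run([sys.executable, svc], capture_output=True, text=True)

# The first run has exited ("stopped"); update the executable in place, restart:
with open(svc, "w") as f:
    f.write('print("version 2")\n')
run2 = subprocess.run([sys.executable, svc], capture_output=True, text=True)

print(run1.stdout.strip(), "->", run2.stdout.strip())  # version 1 -> version 2
```

Nothing else on the system has to notice the swap, which is why it leaves uptime untouched.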

Reply

cellio November 14 2010, 19:50:58 UTC
Hmm, good point. The only user interface I use on Unix machines these days is the shell. (In the past, also X-Windows.)

Reply


(The comment has been removed)

merle_ November 14 2010, 23:34:28 UTC
I agree. Back in the Linux 0.91 days I was rebooting constantly because I patched the kernel every night. These days, I live with what I have, because the apps are mostly independent (and daily builds were a pain). My work desktop is still running RHEL3. Why upgrade? I only use it for terminal windows, it's behind a firewall behind a VPN behind another firewall.

Reply

geekosaur November 18 2010, 03:37:26 UTC
OSX / Darwin has an additional little feature: it prelinks shared objects so they can be fast-loaded at non-conflicting addresses for most processes. The gotcha is, if a prelinked object is modified and then used before the prelink cache is regenerated (as by an install while the system is running), Bad Things happen. (It's been known to require a complete reinstall to recover.) So starting in 10.5 Apple made most installs require reboots, specifically to ensure that the prelink cache remains valid ( ... )

Reply


dragonazure November 14 2010, 22:48:02 UTC
It's been a long time since I did any operating systems work, but from what I remember, it largely depends on the type of update. In UNIX, I haven't seen or been involved in a major upgrade in ages, but usually only services and device drivers get updated, and that doesn't require restarting the entire system--just "refreshing" the services. A serious upgrade to the OS kernel will generally require restarting the system. If you have to reconfigure your system settings, that also usually requires a restart, but I don't think that is what you are asking....

With Windows, I suspect that certain things are still very closely coupled to the operating system kernel. If I were a little more paranoid/cynical, I might think it is also a sneaky way to mask memory leaks and garbage collection problems.... 8^) To be honest, I do get a lot of Windows "hot-fixes" at work, but I suspect they are simply patches and upgrades to non-core components of the system (or what passes for TSRs these days).
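The "refreshing" of a service mentioned above is often done by sending the daemon a signal rather than restarting it. A minimal, Unix-only Python sketch (the config dict and its values are invented for illustration):

```python
import os
import signal

config = {"log_level": "info"}

def reload_config(signum, frame):
    # A real daemon would re-read its config file here; we just flip a value.
    config["log_level"] = "debug"

# Many Unix daemons reload configuration on SIGHUP instead of restarting.
signal.signal(signal.SIGHUP, reload_config)

os.kill(os.getpid(), signal.SIGHUP)  # what `kill -HUP <pid>` would do
print(config["log_level"])  # debug
```

The process keeps running the whole time, so from the system's point of view nothing was ever down.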

Reply

cellio November 15 2010, 02:23:07 UTC
I get lots of updates at work too (where I am currently using XP), but I usually can't tell which are OS upgrades and which are application-level stuff that our IT department has decided we need. We just get a generic message about updates (and instructions to reboot).

Reply


sethg_prime November 15 2010, 01:19:17 UTC
I don’t know about Mac OS, but aside from what others have said above, there is a specific difference between Windows and Unix-family filesystems that is relevant here.

In Unix filesystems, there is a level of indirection between a file’s name and its inode, which contains things like the ownership, permissions, and pointers to the blocks storing the actual data on the disk. Because of this indirection, one process can open a file, another process can delete it, and the storage the file uses will not actually be freed up until the first process closes it ( ... )
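That delete-while-open behavior is easy to see from a script. A small Python demonstration (Unix only; the file contents are arbitrary):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w+") as f:
    f.write("still here")
    f.flush()

    os.unlink(path)                    # the *name* is gone...
    assert not os.path.exists(path)

    f.seek(0)
    print(f.read())                    # ...but the inode and its data survive
                                       # until the last open descriptor closes
```

Windows, by contrast, typically refuses to let you replace or delete a file while another process has it open, hence the reboot.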

Reply

cellio November 15 2010, 02:27:38 UTC
Oh! I hadn't made the connection about inodes, even though (1) I know what they are and (2) I've been frustrated by that complaint from Windows that some (unnamed, mysterious) process is using a file I want to delete. Yeah, that makes sense!

I'm currently on XP at work, though I understand that Windows 7 will be rolling out next year. I should be good until October, when the lease on my current machine expires, assuming nothing melts down that would require earlier replacement. (My goal is to be among the last to get it, not among the first. That's not specific to Windows 7; for any expensive transition, I want to be able to benefit from what others have learned, 'cause my deadlines aren't going to get pushed out just because I now have to figure out accessibility, security, and just plain usability in a new environment.)

Reply


rjmccall November 15 2010, 09:31:24 UTC
Almost all software updates require changes to code. That code might be in the kernel, in a dynamic library, or in a program binary. The update either has to hot-patch currently running code - more about this later - or it has to shut down everything which has that code loaded. That's impossible for the kernel, of course, and relatively painless for programs unless they're system daemons. Thus the major issue is dynamic libraries. Command-line programs tend to have relatively few dylib dependencies (other than the C/C++ standard libraries), so a box which doesn't run a GUI (or can drop out of its GUI) can usually patch most of the system without needing to technically reboot. A GUI program, on the other hand, tends to have dozens of different dynamic libraries loaded at once - many more on Mac OS, which takes this to extremes - and so it's much easier to just reboot the GUI, which usually means the system ( ... )
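The dynamic-library half of this is worth a concrete sketch. Because a Unix file name is just a pointer to an inode, an installer can atomically rename a new library over the old name while running processes still hold the old copy. A toy Python illustration (an ordinary file stands in for a shared library; `libfoo.so` is a made-up name):

```python
import os
import tempfile

d = tempfile.mkdtemp()
lib = os.path.join(d, "libfoo.so")      # stand-in for a shared library

with open(lib, "w") as f:
    f.write("old code")

holder = open(lib)                      # a "running process" holding the old version

# Installers write the new version beside the old one and rename() it into
# place; rename is atomic and swaps only the directory entry, not open inodes.
tmp = lib + ".new"
with open(tmp, "w") as f:
    f.write("new code")
os.replace(tmp, lib)

print(holder.read())                    # old code  (old inode, still open)
with open(lib) as f:
    print(f.read())                     # new code  (anyone opening the name now)
holder.close()
```

Processes started after the update pick up the new library; long-running ones keep the old one until they exit, which is why a reboot (or at least a logout) is still the simple way to make everything consistent.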

Reply

(The comment has been removed)

rjmccall November 16 2010, 18:30:04 UTC
Neat. ksplice looks to be relatively heavyweight compared to MS hotpatching, since it requires briefly halting everything except ksplice, but I doubt that makes much difference in practice: the actual downtime should be brief unless it needs to update data structures, which is not something MS's hotpatching makes any easier.

Reply

