I have just failed miserably at reading this article, which is a book review of sorts. The book being reviewed is an apparently popular reference for the C programming language.
yes, with great frustration. is it any wonder i went gray early?
i think one thing that saves the C programmer's ass much of the time is the compiler; compilers have gotten much better about warning on implicit actions. yet sometimes, this same "save" gets you into a bind.
i got my company into big trouble once because i did not properly understand the conversion rules between signed and unsigned ints. but this was in 1994.
i'm thinking that there is a sort of culture-clash going on here. C was written in 1970-mumble... back when virtually all you had was assembler and fortran. if you wanted any kind of significant control over stuff, you used assembler. if you wanted to write device drivers, only assembler could do it. so C was written to make writing assembler less painful yet just as flexible.
nowadays, the kinds of things you'd "typically" want to do as a programmer have changed. most people are writing web pages or database queries, not device drivers. thus the languages that have arisen in the last 20 years are more oriented toward that, where creativity is needed at the content level rather than the mechanism level. also, assembler was much more important in the old days when CPUs were SLOW and memory was tight, so every cycle and every byte counted. these days the network is usually the bottleneck, so we can afford not to worry so much about efficiency of CPU and memory usage.
so many of C's problems lie in its I/O library; there are actually too many options to twiddle with too few switches. even C++ is usually better, let alone java or perl.