Cube
I can solve a Rubik's Cube without looking at instructions. Some of the moves I “get”; some I don't, but now that they're memorized I can watch them carefully and am beginning to understand them. Cool.
Movie
I watched The Professional. Wow. Awesome. In it, I noticed an actor I hadn't noticed before: Gary Oldman, who plays the
( Read more... )
First, I completely disagree with your professor's take on whether and how often computers fail. They do fail, and being resilient to failure is important. The time required to react to a failure can easily matter as much as how observable the failure is, especially in real-time programming.
However, I don't think either of these parameters (time to observe a failure, severity of its effect) is strongly influenced by this coding technique. If you allow for data corruption at a predictable rate across all bits in the system, the failure modes will be instruction-set dependent, not tightly correlated with medium-level programming idioms like the one you've shown above. Planning for failure is accomplished through watchdogs, protected memory, strong type enforcement, malignant fill of unused memory locations, strictly enforced memory allocation/deallocation, and so on, all of which are standard in high-reliability applications.
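To make "malignant fill" concrete, here's a minimal sketch -- the fill byte and function name are hypothetical, and a real system would choose a pattern suited to its ISA:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical pattern: chosen so that stale pointers or opcodes read
       from scrubbed memory are likely to fault quickly and visibly. */
    #define MALIGNANT_FILL_BYTE 0xA5

    /* Scrub unused or freed memory so any later accidental use of it fails
       loudly instead of silently propagating bad data. */
    static void scrub_region( void *region, size_t len )
    {
        memset( region, MALIGNANT_FILL_BYTE, len );
    }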
That said, since 90+% of commercial software development does not involve authoring new code, I would posit that the single most important property of one's code, apart from correctness, is maintainability. That is the primary yardstick to use in software development.
To me, maintainability includes reusability, modularity, entry-point/exit-point management, documentation, consistent and rigorous coding guidelines, minimal use of language features, testbench support, and disciplined configuration management.
I would claim that, for a medium-level, weakly typed language that permits casting, the most maintainable code fragment is:
    /* Comment indicating purpose of loop -- loop_tag */
    uint32 i = 0;

    /* Description of LIMIT_FOR_WHATEVER */
    const uint32 LIMIT_FOR_WHATEVER = 100;

    for ( i = 0; i < LIMIT_FOR_WHATEVER; i++ )
    {
        properly_indented_stuff;
    }
    i = 0;
    /* loop-end loop_tag */
size_t, as I understand it, is defined as the type returned by sizeof, and as long as you're already including <stddef.h> or the rest of the C standard library, you might as well use a more descriptive typedef. If you want an unsigned, 32-bit representation of an int, define it. If you want a platform-derived sizing of an unsigned int, use unsigned int. I fail to see the benefit of size_t here, even if your goal is to index into arrays with i.
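A minimal sketch of "define it" (assuming a target where unsigned int is 32 bits; on a C99 toolchain one could instead build this on uint32_t from <stdint.h>):

    /* Descriptive, fixed-width type: documents intent better than size_t. */
    typedef unsigned int uint32;   /* assumption: unsigned int is 32 bits here */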
While I understand the reasoning, I don't think you can say that ++i is never slower than i++ without knowing more about the compiler, assembler, instruction set, integer execution unit, and scheduler of your target platform/CPU. The relevant factors include the presence or absence of auto-post-increment and auto-pre-increment addressing in the ISA, register-hazard handling, compiler optimization, and so on. If i is so complex a data type that the 'not equal' operator is suggested for performance reasons, you have a point -- but that should be pretty infrequent, and even then you're really trying to express membership in a set (from 0 to n) using a not-equals operator, which is awkward.
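To illustrate the set-membership point (a sketch in the style of the fragment above; n is a placeholder):

    /* States "i is in the set 0..n-1" directly: */
    for ( i = 0; i < n; i++ )
    {
        properly_indented_stuff;
    }

    /* Relies on i landing exactly on n -- awkward, and fragile if the
       body ever skips past it: */
    for ( i = 0; i != n; i++ )
    {
        properly_indented_stuff;
    }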
Regardless, once more, one should be using maintainability as the metric -- and I claim the above code is more maintainable and should be closer to the default.
As to scoping the iterator to the for loop: my opinion is that in procedural programming, variables used in a code block should be declared exactly once, in a common declaration block, because that enhances maintainability. I see little performance benefit in using the stack to declare and destroy a new iterator inside the for loop itself. Perhaps from an information-hiding perspective it is "better", but I don't think that's a clear win.
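The two styles side by side (a sketch; the loop-scoped form assumes C99 or later):

    /* Declared once, in a common declaration block -- the style argued
       for here: */
    uint32 i = 0;
    for ( i = 0; i < LIMIT_FOR_WHATEVER; i++ )
    {
        properly_indented_stuff;
    }

    /* Scoped to the loop itself (C99+); i no longer exists after the loop: */
    for ( uint32 i = 0; i < LIMIT_FOR_WHATEVER; i++ )
    {
        properly_indented_stuff;
    }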
You absolutely should be using an unsigned iterator for for() loops, if that's appropriate to your dataset. Nothing is gained from a signed iterator unless you're using both positive and negative element offsets -- which I would also claim isn't very maintainable.
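For example (a sketch using the uint32 typedef suggested above; the function and its name are hypothetical):

    /* The dataset is indexed 0..len-1 with len unsigned, so an unsigned
       iterator matches it exactly; a signed iterator would gain nothing
       and would draw signed/unsigned comparison warnings against len. */
    static void zero_fill( unsigned char buf[], uint32 len )
    {
        uint32 i = 0;
        for ( i = 0; i < len; i++ )
        {
            buf[i] = 0;
        }
        i = 0;   /* reset after use, per the fragment above */
    }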