robhu pointed this out to me and, unfortunately, it's tripped my "ooh, that looks like fun!" mechanism... a series of puzzles trapped inside a block of data and a virtual machine spec
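For anyone who hasn't poked at it yet: assuming this is the ICFP 2006 "Universal Machine" spec (which is what the SANDmark and UMIX chatter below suggests), the machine itself is tiny: eight 32-bit registers, arrays of 32-bit "platters", fourteen operators. A minimal decode sketch in C, with the field layout taken from that spec and the struct/function names my own:

#include <stdint.h>

/* One instruction word: operator number in the top four bits,
   register fields A/B/C in the low nine bits (per the 2006 spec). */
typedef struct { uint32_t op, a, b, c; } um_insn;

static um_insn um_decode(uint32_t w)
{
    um_insn i;
    i.op = w >> 28;        /* operator number, 0..13 */
    i.a  = (w >> 6) & 7;   /* register A */
    i.b  = (w >> 3) & 7;   /* register B */
    i.c  = w & 7;          /* register C */
    return i;
}

The one wrinkle (again per that spec) is operator 13, "orthography", which keeps its register in bits 25-27 and a 25-bit literal in the low bits, so it needs a special case before the general decode above.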
I'm writing a C# implementation now that should run at a reasonable speed. In the paper the contest creators published, their C# implementation was only twice as slow as their C implementation (which was itself 50% slower than their assembly implementation...).
When I've finished the C# one I'm going to learn C and write a C version.
C# should be reasonably fast, as Java would be if it were well suited to this task.
Anything hugely faster than a decent C/C++ implementation is unlikely, though... asm will gain a bit, as would JIT optimisation, but they're ridiculously expensive relative to the gains (i.e. 1000% more work for 20% more speed).
Lua isn't at all suitable for such tasks, which is why emulator cores are written in asm (the "switch" statement in C/C++ doesn't compile very efficiently).
Of course, since all we can compare is times, I could point out that this machine was built in early 2002... ;)
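For the curious, here's roughly what that dispatch difference looks like. The toy accumulator machine and its opcodes below are made up for illustration (it is not the contest VM); the second loop uses GCC/Clang's "labels as values" extension, which is one of the tricks interpreter cores use to give each handler its own indirect jump instead of the single shared one a switch usually compiles to:

#include <stdio.h>

enum { OP_INC, OP_ADD, OP_PRINT, OP_HALT };  /* hypothetical opcodes */

/* Plain switch dispatch: the compiler typically emits a bounds check
   plus one shared indirect jump (or a compare chain) per instruction. */
static void run_switch(const unsigned char *pc)
{
    long acc = 0;
    for (;;) {
        switch (*pc++) {
        case OP_INC:   acc++;                break;
        case OP_ADD:   acc += *pc++;         break;
        case OP_PRINT: printf("%ld\n", acc); break;
        case OP_HALT:  return;
        }
    }
}

/* Threaded dispatch via computed goto (GCC/Clang extension): every
   handler ends by jumping straight to the next handler.  The table
   order must match the enum above. */
static void run_threaded(const unsigned char *pc)
{
    static void *dispatch[] = { &&do_inc, &&do_add, &&do_print, &&do_halt };
    long acc = 0;
    goto *dispatch[*pc++];
do_inc:   acc++;                 goto *dispatch[*pc++];
do_add:   acc += *pc++;          goto *dispatch[*pc++];
do_print: printf("%ld\n", acc);  goto *dispatch[*pc++];
do_halt:  return;
}

int main(void)
{
    const unsigned char prog[] = { OP_INC, OP_ADD, 41, OP_PRINT, OP_HALT };
    run_switch(prog);    /* prints 42 */
    run_threaded(prog);  /* prints 42 */
    return 0;
}

Whether the threaded version actually wins anything depends on the compiler and the branch predictor; a decent compiler already turns a dense switch into a jump table, so it's worth measuring before reaching for asm.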
I ended up giving it a go too, after seeing RobHu's stuff about it. I started off in Haskell, before realising that I'm not good enough with the language (and that the task isn't well suited to it anyway), and switched to badly written C++.
After that it wasn't that hard, although it took me far longer than it should have to find a stupid operator-precedence mistake that stopped loading from working. SANDmark in 2:25; far too much time wasted today playing with UMIX (four logins stolen so far).
--Edwin
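The comment doesn't say what the precedence mistake actually was, so the snippet below is purely illustrative, but it's the classic way precedence bites a loader that packs big-endian bytes into 32-bit words: '+' binds tighter than '<<'.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical loader helper: pack four big-endian bytes into a
   32-bit word.  The broken version parses as w << (8 + b[i]), and
   the shift amount can even exceed 31, which is undefined. */
static uint32_t pack_broken(const uint8_t *b)
{
    uint32_t w = 0;
    for (size_t i = 0; i < 4; i++)
        w = w << 8 + b[i];     /* bug: shift amount is 8 + b[i] */
    return w;
}

static uint32_t pack_fixed(const uint8_t *b)
{
    uint32_t w = 0;
    for (size_t i = 0; i < 4; i++)
        w = (w << 8) | b[i];   /* what was meant */
    return w;
}

gcc -Wall (via -Wparentheses) will usually flag the broken version with a "suggest parentheses" warning, which is one more reason to keep warnings on even in throwaway contest code.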
Cool! I've got two passwords so far, and have a good idea where two more will come from.
I'm not going to get into a measuring war over SANDmark times, though, since I'm within a factor of two of the fastest reported results and that seems fast enough :)
I was quite surprised to see how much enabling optimisations helped, though. It took about 7 minutes without optimisations, and 2:30 with -O3.

Yeah, that's about what I'd expect for optimisations.
Depending on what the code's doing, it can be much slower on deallocation (e.g. VC will do integrity checks on the heap every time free() is called in a debug build), but a factor of 2-3 is about normal for naive vs optimised integer ops.
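For anyone who wants to see the naive-vs-optimised gap on their own machine, a throwaway harness along these lines makes the point (illustrative only; the constants are arbitrary and the numbers will vary with compiler and CPU). Build it once with -O0 and once with -O3 and compare:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile uint32_t sink = 0;   /* keeps the optimiser from deleting the loop */
    uint32_t acc = 0x2006u;
    clock_t t0 = clock();

    /* A tight integer-only loop, roughly the kind of work a VM
       interpreter spends its time on. */
    for (uint32_t i = 0; i < 500000000u; i++)
        acc = (acc * 1664525u + 1013904223u) ^ (acc >> 7);

    sink = acc;
    clock_t t1 = clock();
    printf("acc=%08x  %.2fs\n", (unsigned)sink, (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}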
"Compsci compsci, mohammed jihad."