eyeCalendar
eyeCalendar is my WordPress plugin project. It fetches iCalendar-format files and merges their contents together, allowing totally custom formatting on the part of the site administrator. Until recently, it featured on the sidebar over yonder →, aggregating Hydrogen Economy events along with many other Boston events sucked down from Facebook, Going.com, a number of public Google calendars, and Upcoming.
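The idea itself is simple enough to sketch outside of WordPress. Here’s a rough Perl illustration of the fetch-and-merge step (the plugin itself is PHP, and the feed URLs and naive VEVENT parsing below are just stand-ins, not the plugin’s actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);

# Hypothetical feed list -- the real plugin takes these from its settings.
my @feeds = (
    'http://example.com/hydrogen-economy.ics',
    'http://example.com/boston-events.ics',
);

my @events;
for my $url (@feeds) {
    my $ics = get($url) or next;   # skip feeds that fail to fetch
    # Very naive VEVENT extraction: grab DTSTART and SUMMARY from each block.
    while ($ics =~ /BEGIN:VEVENT(.*?)END:VEVENT/sg) {
        my $block = $1;
        my ($start)   = $block =~ /DTSTART[^:]*:(\S+)/;
        my ($summary) = $block =~ /SUMMARY[^:]*:(.*)/;
        push @events, { start => $start // '', summary => $summary // '' };
    }
}

# Merge simply by sorting on the (string-sortable) DTSTART timestamp,
# leaving the actual formatting to whoever renders the list.
for my $e (sort { $a->{start} cmp $b->{start} } @events) {
    print "$e->{start}  $e->{summary}\n";
}
```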
A bug in eyeCalendar caused my PHP installation to consume all available CPU on its server, so the widget is disabled until I fix the problem. I think the root cause is in the fetch code, but I can’t be sure until I test and test some more. Since the project has never seen an actual release, I’m sure nobody actually cares except me! There is one other developer attached to the SourceForge project, but he hasn’t done anything.
LoopCollector
LoopCollector is an audio effect inspired by an event described by The Custodian. It cuts source audio into arbitrarily long chunks and rearranges those chunks to form a rhythmic pattern. I created this project mostly to teach myself how to program AudioUnits and VST plugins.
I began by prototyping the algorithm in a Perl script. A second version of the Perl script followed. Neither was satisfactory. Currently I’m implementing a command line version in C++. These three are all totally dependent on sox to decode and encode audio. The Perl scripts open up pipes to sox, and I’m pretty sure the C++ version will too. I originally wrote the code so that it could eventually form the basis of both the VST and AudioUnit versions, so I used all manner of wacky C++ template crap so that I could write code that handles floats and ints and chars and shorts without rewriting anything. I’ve scaled back on the wacky templates since I realized that this is all a prototype anyway and would probably require massive work to fit into a VST plug-in!
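For the curious, here’s roughly the shape of that pipeline, sketched in Perl: decode to raw samples through a pipe from sox, chop the sample stream into chunks, reorder them, and pipe the result back through sox to encode. The fixed chunk length and the shuffle below are placeholders, not the actual rearrangement algorithm:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(shuffle);

my ($infile, $outfile) = @ARGV;

# Decode to raw mono 16-bit samples at 44.1kHz via a pipe from sox.
open(my $in, '-|', 'sox', $infile, '-t', 'raw', '-e', 'signed-integer',
     '-b', '16', '-', 'channels', '1', 'rate', '44100')
    or die "can't run sox for decoding: $!";
binmode $in;
my $raw = do { local $/; <$in> };
close $in;

# Chop the sample stream into fixed-size chunks (placeholder: 0.25s each).
my $chunk_bytes = 2 * int(44100 * 0.25);   # 2 bytes per 16-bit sample
my @chunks;
push @chunks, substr($raw, 0, $chunk_bytes, '') while length $raw;

# Placeholder "rearrangement": a plain shuffle. The real effect builds a
# rhythmic pattern rather than a random order.
@chunks = shuffle(@chunks);

# Encode the reordered stream back to a file via a pipe into sox.
open(my $out, '|-', 'sox', '-t', 'raw', '-e', 'signed-integer', '-b', '16',
     '-c', '1', '-r', '44100', '-', $outfile)
    or die "can't run sox for encoding: $!";
binmode $out;
print $out $_ for @chunks;
close $out;
```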
I haven’t got any sound samples yet, because I haven’t had any satisfactory results, but as soon as I do, I will probably post them.
Music
I have radio shows on May 29th (covering NCP part 1, 7PM to 10PM) and June 5th (Test Pattern, 6PM to 7PM; I haven’t decided on the subject matter yet).
Susanna from Rare Frequency is doing a Raster-Noton Test Pattern on May 29th as a preview for the Alva Noto/Byetone appearance at Middlesex.
I have some actual ideas for some actual productions bouncing around in my head. I will get them out in some format, even if it takes me years. The InfiniteStateMachine series on the creation of the ISM label has only helped fuel my musical aspirations.
fs1rgen
I bought a Yamaha FS1r a number of years ago and I still don’t have a clue how to program the damn thing. I originally bought it as the sound module for a wind controller I never bought (although maybe some day I will purchase one of the new Akai EWI USB units). The front panel is far too tiny for all the options in a single patch, and the only available Mac OS X editor is complicated despite the larger screen. What I have decided to do is use the MIDI implementation described in the manual to create a genetic algorithm of sorts that can generate patches.
Because I haven’t started coding yet, I’m going to write up a design here. I’ll even put it under a cut so you can skip it.
The FS1r user manual has a handy guide that describes all the fields in the programs. The machine’s patches consist of 1-4 individual voices, along with settings for effects, controllers, volume, envelopes, and so on. Individual voices are basically FM instruments, although the FS1r is designed primarily around formant synthesis, which allows some really weird voice-type effects. There is a somewhat hidden feature which allows you to make sequences called FSEQs that mutate the formants over time, almost like an arpeggiator. Yamaha never released an editor, and the only third-party ones are long since defunct, especially for the Mac, as this synth was released in the OS 8 days, maybe even System 7.
In any case, now that I know the ranges of all valid parameters, I can create random patches, save them to a MIDI file, then send them straight to the synth.
The plan right now is to implement this as a Perl application, just to get the patch generation right. Then I will rewrite it as a Cocoa app so I can talk directly to the device.
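As a very rough sketch of that Perl stage, random patch generation could be driven by a table of parameter definitions. The parameter names, ranges, and values below are invented for illustration; the real field list and the SysEx/MIDI encoding come straight from the manual:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical parameter table -- the real one would be built from the
# FS1r manual's field descriptions (names, ranges, enumerations).
my %SPEC = (
    'voice1.carrier.freq_ratio' => { type => 'numeric', min => 0, max => 127 },
    'voice1.carrier.level'      => { type => 'numeric', min => 0, max => 127 },
    'common.lfo1.wave'          => { type => 'enum',    values => [qw(tri saw squ sh)] },
);

# Generate one fully random patch as a name => value hash.
sub random_patch {
    my %patch;
    for my $name (keys %SPEC) {
        my $p = $SPEC{$name};
        if ($p->{type} eq 'numeric') {
            $patch{$name} = $p->{min} + int(rand($p->{max} - $p->{min} + 1));
        } else {
            $patch{$name} = $p->{values}[ int(rand(@{ $p->{values} })) ];
        }
    }
    return \%patch;
}

my $patch = random_patch();
print "$_ = $patch->{$_}\n" for sort keys %$patch;
# A later step would encode %$patch into the bulk-dump format from the
# manual and write it out as a MIDI/SysEx file for the synth.
```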
Now, I’ve never done genetic programming or annealing, but since the success criteria are so subjective, what’s it matter?
Anyway, here is what will happen:
- The user selects a number (P) of pre-existing patches (saved as MIDI files) and instructs the script to generate a number (R) of totally random patches. The program will also need the number (G) of patches to generate for each generation of offspring, the number (K) of patches to keep unaltered based on rank, the maximum shift (M) in the index of enumerated parameters, and ranges of performance, voice, and FSEQ banks that are safe to overwrite. These ranges could be as little as 1 performance, 4 voices, and 1 FSEQ.
- The program instructs the user to audition all voices in generation 0 and score them from 1 to 100. These scores are recorded and normalized so that all the scores in a generation will add up to 1.
- For every parameter in every voice, the program will do the following for every new patch it generates for the current generation (G-K new patches; see the sketch after this list):
- For numeric parameters, create a weighted average of that parameter’s values across the members of the previous generation, using the normalized scores as weights. Split the new offspring between values within 1 standard deviation of that average and values outside that range.
- For enumerated parameters, the new offspring takes its value from one member of the previous generation chosen at random, with probabilities weighted so that higher-scoring patches are more likely to have their value chosen. Half of the new offspring will additionally have this parameter shifted in either direction by no more than M.
- If a parameter does not have source data in a parent, the source data should be synthesized with a random value.
- All members of the new generation are scored, and we repeat the process.
- History should be tracked non-destructively, so that the user can at any time invalidate a generation, or decide to carry or drop particular members regardless of score.
- Similarly, the user should be able to lock particular parameters. This could be the way to evolve new FSEQs on a particular patch.
- At any time the user can say “Oh! this is what I want!” and save off the patch to a safe place.
- The app should probably have a config file that lays out all the parameters, their limits, and dependencies. It will probably be easier to do this than to hardcode everything!
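To make the breeding step concrete, here’s a hedged Perl sketch of producing one offspring. It reuses the hypothetical %SPEC table from the earlier sketch, assumes each parent carries its parameter values plus a raw 1-100 score, and simplifies the standard-deviation split for numeric parameters to sampling around the score-weighted mean; none of this is final:

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Normalize raw 1-100 scores so that a generation's scores sum to 1.
sub normalize_scores {
    my (@parents) = @_;
    my $total = sum(map { $_->{score} } @parents) || 1;
    $_->{weight} = $_->{score} / $total for @parents;
}

# Pick one parent at random, weighted by normalized score.
sub weighted_pick {
    my (@parents) = @_;
    my $r = rand();
    for my $p (@parents) {
        return $p if ($r -= $p->{weight}) <= 0;
    }
    return $parents[-1];
}

# Breed one offspring patch from the previous generation.
# $spec is the parameter table; $max_shift is M from the plan above.
sub breed_one {
    my ($spec, $max_shift, @parents) = @_;
    my %child;
    for my $name (keys %$spec) {
        my $p = $spec->{$name};
        if ($p->{type} eq 'numeric') {
            # Score-weighted mean of the parents' values, nudged a little
            # (a crude stand-in for the 1-standard-deviation split).
            # Missing parent values fall back to the minimum here; the plan
            # above calls for a random value instead.
            my $mean = sum(map { $_->{weight} * ($_->{params}{$name} // $p->{min}) } @parents);
            my $dev  = ($p->{max} - $p->{min}) * 0.05;
            my $val  = int($mean + (rand() * 2 - 1) * $dev + 0.5);
            $val = $p->{min} if $val < $p->{min};
            $val = $p->{max} if $val > $p->{max};
            $child{$name} = $val;
        } else {
            # Take one parent's value (score-weighted), then maybe shift its
            # index in either direction by up to M.
            my $donor = weighted_pick(@parents);
            my $value = $donor->{params}{$name}
                     // $p->{values}[ int(rand(@{ $p->{values} })) ];
            my ($idx) = grep { $p->{values}[$_] eq $value } 0 .. $#{ $p->{values} };
            if (defined $idx && rand() < 0.5) {
                $idx += int(rand(2 * $max_shift + 1)) - $max_shift;
                $idx  = 0                  if $idx < 0;
                $idx  = $#{ $p->{values} } if $idx > $#{ $p->{values} };
                $value = $p->{values}[$idx];
            }
            $child{$name} = $value;
        }
    }
    return \%child;
}
```

Calling code would run normalize_scores on the previous generation once, then call breed_one for each of the G-K new patches, carrying the top K forward unchanged.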
Not sure where to host this project yet. SourceForge has far too much overhead; GitHub or Google Code are possibilities. That’s low on the list of worries. A local repo should be fine, especially since I can’t possibly work on this on the train. The FS1r doesn’t take batteries!
crossposted from The Hydrogen Project