  1. Differences between practice-based and practice-led research (Creativity & Cognition Studios)

    Interesting article on the differences between practice-based and practice-led research.

  2. Graduation day with some of my BA students. I’d de-robed by this point (too clammy), but it was a great day.

    Proud of them all and I’m sure they’ll carry on doing great things.

    End of an era.

  3. View from the top of the Shard. Kind of mirrors the effect of the G&Ts we sneaked in.

  4. Model Boat - Sloppy 7’s

    From the archive… I stumbled across this today. This is the only recording of the last proper (and my favourite) Model Boat song. Recorded in a damp practice room all the way back in summer 2011, it invariably became much faster when we started doing it live. Sloppy as hell and unfinished, with none of Jof’s jacked vocals, it still rocks every 7 of its beats in a bar.

    Check out modelboat.bandcamp.com for the proper stuff

  5. Here’s a screenshot of a bit of my upcoming piece, Flex, which I’ll be performing next month at Sight, Sound, Space & Play in Leicester. It’s a musical game using a Brain-Computer Interface as the controller for the sound. The system selects elements of sound and assigns controls in a quasi-random fashion for the user to figure out while composing on the fly. The front end uses Integra Live, but the real-time mappings from the brain data are handled by Pure Data, which sets the rules and parameters of the game.

    I’m slowly finding all of the fun bugs inside Integra, which range from randomly dropping audio altogether to dodgy built-in MIDI functionality. Still, it’s an extremely promising platform, and I can’t wait for the developers to open it up for pd coding integration.

    One of the good things about Integra Live is the multichannel file support and the simplicity of controlling surround panning. As the piece I’m building is quadraphonic, the layout of the controls makes things so much easier and saves much faffing in pd. Still, the interface is very processor heavy; start adding a lot of modules and things begin to slow down quickly. One way to avoid this is to share common modules (i.e. input and output modules) and confine things to as few blocks as possible. I’ll post a vid/screencast of the piece sometime soon.
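
    For anyone curious how the quasi-random assignment works in principle, here’s a minimal Python sketch of the idea. The real mapping lives in the Pure Data patch; the parameter and control names below are made up for illustration.

    import random

    # Hypothetical sound parameters the game can hand to the player.
    SOUND_PARAMETERS = ["grain_density", "filter_cutoff", "quad_pan_angle", "playback_rate"]

    # Hypothetical control signals derived from the EEG.
    EEG_CONTROLS = ["alpha_power", "beta_power", "blink_rate", "attention_index"]

    def assign_controls(seed=None):
        """Quasi-randomly pair each EEG control with a sound parameter.

        The player has to work out the pairing by ear while the piece
        keeps composing itself."""
        rng = random.Random(seed)
        params = SOUND_PARAMETERS[:]
        rng.shuffle(params)
        return dict(zip(EEG_CONTROLS, params))

    print(assign_controls())  # e.g. {'alpha_power': 'quad_pan_angle', ...}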

  6. This is Internet Explorer and Friends from the forthcoming Dethscalator album Racial Golf Course No Bitches that I produced. Coming out on Riot Season on limited vinyl.


    I’m not actually sure whether this is the final mixed/mastered version, but you get the idea.

  7. This blog has been eerily quiet of late, although that doesn’t mean I haven’t been up to anything. More things are being finished in the near future, so here are some teasers of things to come.

    First up, I’ve been asked to present a keynote paper at this year’s Sound, Sight, Space and Play (SSSP) at De Montfort Uni, where I used to work many years ago. Here’s the abstract for the paper:

    Real-time notation through Brain-Computer Music Interfacing

    Introduction

    Brain waves have long been of interest to musicians as a viable means of input to control a musical system. Until recently, research has focused on the voluntary control of alpha waves [1][2] and on event-related potentials time-locked to stimuli, both of which fall short of explicit real-time control. This paper presents ongoing research into utilising EEG techniques from studies in neuroscience in the development of a Brain-Computer Music Interface (BCMI) as a precision controller in composition and performance.

    Meaning in Brain Waves

    Affordable, more portable hardware and faster signal processing have widened access for the development of bespoke BCMI tools, as well as presenting fresh obstacles to overcome [3]. Still, when working with electrical signals so minute, complex and highly prone to interference, more work is needed to extract meaning from the EEG. This paper identifies methods for alleviating these issues and approaches to mapping within BCMI systems.

    Mind Trio

    This paper presents the BCMI performance piece Mind Trio, which allows a BCMI user to conduct a score that is presented to a musician in real time.

    The automated composition process will take a set of pre-composed musical cells, which will continuously change slightly by means of transpositions, changes in tempo, replacement of notes, etc., guided by the conductor’s explicit decisions.
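
    As a rough illustration of the kind of cell variation described above (not the actual Mind Trio code), one transposition / tempo / note-replacement step could look like this in Python; the ranges and choices are placeholders:

    import random

    def vary_cell(cell, tempo, rng=random.Random()):
        """Apply one small, conductor-style variation to a pre-composed cell.

        cell is a list of MIDI note numbers, tempo is in BPM. The moves and
        ranges here are illustrative only."""
        new_cell, new_tempo = list(cell), tempo
        move = rng.choice(["transpose", "retempo", "replace"])
        if move == "transpose":
            shift = rng.choice([-2, -1, 1, 2])          # small transposition
            new_cell = [note + shift for note in new_cell]
        elif move == "retempo":
            new_tempo = tempo * rng.uniform(0.9, 1.1)   # slight tempo change
        else:
            i = rng.randrange(len(new_cell))            # replace one note
            new_cell[i] += rng.choice([-5, -3, 3, 5])
        return new_cell, new_tempo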

    References 

    1. Ortiz Perez MA, Knapp RB (2009) Biotools: Introducing a Hardware and Software Toolkit for Fast Implementation of Biosignals for Musical Applications. In: Computer Music Modeling and Retrieval. Sense of Sounds: 4th CMMR, Copenhagen, Denmark.

    2. Grierson M, Kiefer C (2011) Better Brain Interfacing for the Masses: Progress in Event-Related Potential Detection using Commercial Brain Computer Interfaces. In: 29th International Conference on Human Factors in Computing Systems, Vancouver, Canada.

    3. Eaton J, Miranda E (2012) New Approaches in Brain-Computer Music Interfacing: Mapping EEG for Real-Time Musical Control. In: Music, Mind, and Invention Workshop, New Jersey, USA.

  8. Another NTS. Syncing Dropbox with arbitrary folders instead of having to use the Dropbox folder: ln -s <drag source folder here> <drag dropbox folder> creates a link in the Dropbox folder that points to the source.

  9. NTS. MIDI in/out config for the PC/Mac setup. For the final solution ignore PD on the left-hand side - that's for the one-machine demo.

    MIDI goes from BrainBay via MIDI Yoke into PD, then out via the MIDISport to M4L on the Mac.
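
    Just to make the chain concrete: the relay in the real setup is the PD patch, but the same pass-through could be sketched in Python with mido. The port names below are placeholders that depend on how MIDI Yoke and the MIDISport enumerate on each machine.

    import mido

    # Placeholder port names - the real ones depend on the machine.
    IN_PORT = "In From MIDI Yoke:  1"
    OUT_PORT = "USB MIDISport 1x1 Port 1"

    # Forward every incoming message unchanged to the outgoing port.
    with mido.open_input(IN_PORT) as inport, mido.open_output(OUT_PORT) as outport:
        for msg in inport:
            outport.send(msg)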

  10. Radio 4’s PM show feature on Prof. Eduardo Miranda’s new piece, Symphony of the Minds. I ‘just happened’ to be in the ICCMR lab, and got talking about my Brain-Computer Music Interface work.

  11. Open Outcry documentary. The stock market trading floor comes alive in the reality opera.

  12. Open Outcry opera featured on BBC Radio 4 Today programme on 17/11/12. I’m gonna post some more about the software development I did for this project sometime soon.

  13. Open Outcry Performance Model V1 from joel eaton on Vimeo.

    A Stock Market Simulation for Open Outcry. Performance Model.

    This is a quick overview of the current iteration of the stock market I’ve modelled for Open Outcry. Actually it’s more of a chance to document and summarise the reams of notes I’ve made into a short description of what the code actually does.

    The market trades for a total of 10 years over 120 periods (months). It has three assets, each with a starting value, an annual expected return (how much the stock will be worth at the end of the 10 years), and an annual volatility (how much the price of the stock can vary over the 10 years).

    The assets are statistically related to each other through a correlation coefficient (-1 to 1). A correlation of +1 implies that the two stocks will move in the same direction 100% of the time. A correlation of -1 implies the two stocks will move in the opposite direction 100% of the time. A correlation of zero implies that the relationship between the stocks is completely random.
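
    For the curious, here’s roughly how that correlation structure can be turned into monthly price moves. This is a simplified Python/NumPy sketch with made-up starting values, returns, volatilities and correlations, not the actual Open Outcry code.

    import numpy as np

    # Illustrative figures only: three assets, annual numbers converted
    # to the model's 120 monthly steps.
    start      = np.array([100.0, 50.0, 20.0])    # starting values
    exp_return = np.array([0.06, 0.04, 0.08])     # annual expected returns
    volatility = np.array([0.20, 0.10, 0.35])     # annual volatilities
    corr = np.array([[ 1.0, 0.5, -0.3],
                     [ 0.5, 1.0,  0.0],
                     [-0.3, 0.0,  1.0]])          # pairwise correlations (-1 to 1)

    months = 120
    mu    = exp_return / 12.0
    sigma = volatility / np.sqrt(12.0)

    # Correlate independent normal draws with a Cholesky factor, then walk
    # the prices forward one month at a time.
    L = np.linalg.cholesky(corr)
    rng = np.random.default_rng(0)
    prices = np.zeros((months + 1, 3))
    prices[0] = start
    for t in range(1, months + 1):
        shocks = L @ rng.standard_normal(3)
        prices[t] = prices[t - 1] * np.exp(mu + sigma * shocks)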

    In order to emulate a more realistic market there are certain climates, or regimes, that affect the value of the stocks. Periods of boom, stability or decline are tied in with news stories and can be triggered manually or driven by a probability matrix as a generative process.

    For example, if the market has been in a boom period for a specific amount of time there could be a 20% chance of it staying in this regime, a 40% chance of it moving into decline and a 40% chance of it becoming stable, chosen at random. All of these parameters need to be defined and tested to create a specific type of market.

    Each of these regimes has different parameters that affect the simulation of the market. For example, in a boom period the expected annual returns will be larger than in other regimes, and in a stable market the correlation of assets might be more random due to there being little need for all assets to move together in one direction.
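
    The regime switching itself is easy to sketch. The boom row below uses the 20/40/40 example above; the other rows and the regime-dependent parameters are placeholders rather than the tuned values from the piece.

    import random

    # Transition probabilities per regime.
    TRANSITIONS = {
        "boom":    {"boom": 0.2, "decline": 0.4, "stable": 0.4},
        "decline": {"boom": 0.3, "decline": 0.4, "stable": 0.3},
        "stable":  {"boom": 0.3, "decline": 0.2, "stable": 0.5},
    }

    # Placeholder regime-dependent tweaks: boom inflates expected returns,
    # a stable market loosens how tightly the assets move together.
    REGIME_PARAMS = {
        "boom":    {"return_scale": 1.5, "corr_scale": 1.0},
        "decline": {"return_scale": 0.5, "corr_scale": 1.0},
        "stable":  {"return_scale": 1.0, "corr_scale": 0.5},
    }

    def next_regime(current, rng=random.Random()):
        """Pick next month's regime from the current regime's probabilities."""
        probs = TRANSITIONS[current]
        names = list(probs)
        return rng.choices(names, weights=[probs[n] for n in names])[0]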

    The video here shows the market moving in steps of one month. The market begins in a stable state where regimes are chosen based on probabilities. It then shows the results of the market being pushed into a boom period, then a bust period, then back to normal. For no particular reason I’ve added some recordings of a stream from different locations. Once the model is complete I might use the simulation as a compositional driver, just for fun.

  14. A Wireless Brain-Interface.

    This is a screenshot of the Emotiv EPOC running with the open source software BrainBay. The graph on the left is displaying the FFT response of my brain waves, which are being stimulated by an interface I coded in pd GEM. This allows me to elicit control over music using a wireless and extremely portable device.

    The Emotiv is a pretty noisy interface and unfortunately doesn’t provide the same precise response as the Waverider and g.tec sensors, but this shows that a system can be built using much cheaper equipment and used away from the lab. The next step is to integrate this into my Max Score patches. Video to follow (some time, whenevs).
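
    For reference, the band-power idea behind that FFT display can be sketched in a few lines of Python/NumPy. This mirrors what the BrainBay graph is showing rather than anything Emotiv-specific, and the sample rate and band edges in the comment are just illustrative.

    import numpy as np

    def band_power(samples, fs, lo, hi):
        """Rough power estimate for one EEG channel in a frequency band.

        samples: 1-D array from a single electrode; fs: sample rate in Hz."""
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
        mask = (freqs >= lo) & (freqs < hi)
        return spectrum[mask].mean()

    # e.g. alpha-band (8-12 Hz) power from one second of data at 128 Hz:
    # alpha = band_power(channel_data, fs=128, lo=8, hi=12)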