Inclusive Improv present an Unconference running alongside the 2011 International Computer Music Conference (ICMC).
If you're unsure what an unconf is, there's a very agreeable explanation here.
As an antidote to the broadcast medium of the conference/paper format, the Unconf sessions during the Monday and Tuesday afternoons are a chance for informal discussion. Topics are suggested, and voted on, by the Unconference attendees. This allows for fast-moving, participatory discussion on topics large and small.
Additionally, there will be two unconf gigs on the Sunday and Wednesday nights.
Blog: Today's session was a great start, covering a wide range of topics. It began with questions of 'interdisciplinarity' in composing around non-musical ideas (e.g. connections between art and science), moved into questions of mapping/sonification via gesturality, and finally settled on a lively and philosophical discussion of the nature of performance in a digital age. These and other topics will continue across the week; do come along.
Blog: The Tuesday session divided into a breakout session on spatialisation and a main group. The spatialisation group centred on discussion of the SpatDIF specification (see original proposal).
The main group occupied similar territory to Monday, discussing issues of liveness, performance, intentionality, and gesture. The topic of live-coding as performance was especially interesting, provoking serious debate around the nature of performance and encultured audience expectation.
Adrian Freed put together a quick wordcloud.
Discussion revolved around visual aspects of audio works, and sound in visual works…
There were also two demonstrations of physical inputs:
one by the man with his joystick,
and the other by Richard Hoadley whose paper Sculpture as Musical Interface had been presented in the conference that morning.
10.30pm The Graduate
The Monotron Syndicate, HELOpg and Edges.
10.30pm The Graduate
Two MusicBots and their owner enter a ring on stage and fight: one MusicBot in the left corner (connected to the left speaker), one MusicBot in the right corner (connected to the right speaker). Together the two MusicBots start creating music, simultaneously or in alternation. A MusicBot wins by KO if it is simply too strong for its opponent. After three rounds of three minutes, the judges (the public) decide who is the technical winner.
Idea by Dominic Thibault
Four improvisers with Korg Monotrons, ready to light up the world in joy and wonder.
A set of improvisations with Monotrons, and anyone else who wants to join in.
Idea by Richard Glover
Proposal for inclusion in the Wednesday concert.
I propose including my composition “The Adventures of Norby” in the Wednesday evening concert. Here are some video clips of this piece.
You can learn more about me and my work at my website: www.keithkirchoff.com
Idea by Keith Kirchoff
As part of one of the unconf evening concerts, may I suggest we have an Electroacoustic Karaoke Competition, where people perform to their favourite electroacoustic works. Maybe there could be a prize, but perhaps bragging rights would be good enough!
Proposed by Scott Hewitt
I would like to have a discussion, and ideally a small experiment, on using Git to collaborate on ChucK livecoding.
Perhaps a more general version of this would be to discuss the use of version control in livecoding.
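As a rough sketch of what such an experiment might look like, here is a minimal local Git workflow for versioning a ChucK patch. It assumes git is installed; the directory, file name, patch contents and commit message are all illustrative, not part of the proposal.

```shell
# Sketch only: put a ChucK livecoding patch under version control.
git init -q livecode-session
cd livecode-session
git config user.name "Unconf Demo"        # local identity for the demo repo
git config user.email "demo@example.com"  # (illustrative values)

# A tiny ChucK patch to track (illustrative)
cat > drone.ck <<'EOF'
// a one-second sine drone
SinOsc s => dac;
220 => s.freq;
1::second => now;
EOF

git add drone.ck
git commit -q -m "initial drone patch"

# Collaborators would then exchange changes via a shared remote, e.g.:
#   git remote add origin <shared-repo-url>
#   git push origin master
git log --oneline
```

During a set, each performer could commit between edits and pull the others' changes, which is where the interesting timing and merge questions for livecoding arise.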
We'd like to present the latest iteration of the SpatDIF specification.
Over the course of this spring, the specification has evolved into a white paper that we would like to discuss and disseminate.
SpatDIF is an ongoing collaborative effort aiming to create a method, both semantic and syntactic, as well as best-practice implementations, for spatial audio scene description. The main goal is to present a format that gives the greatest flexibility and interoperability for storing and transmitting data about spatial audio. Further goals include human readability, platform and implementation independence, and fundamental extensibility.
SpatDIF stands for Spatial Sound Description Interchange Format, following in the footsteps of other pre-eminent interchange formats such as AIFF, SDIF and GDIF. Although the acronym implies a file format, it is actually more akin to a syntax for structuring different kinds of audio-scene-related data. Currently it is defined as a set of descriptors for the most important elements, together with a way of extending the set of descriptors to cover various specialised needs.
SpatDIF is based on the idea of describing audio scenes: entities are placed and given certain properties, media resources are allocated to them, and the temporal evolution of parameters, as well as the appearance and disappearance of elements over time, is described.
Furthermore, a number of use-cases are presented, along with example scenes in a number of different file or stream formats. The use-cases, however, are not limited to the audio-scene paradigm alone. The inclusion of a time-independent meta-section, as well as the possibility of creating name-space extensions for almost any type of data descriptor imaginable, makes SpatDIF a versatile way of storing or transmitting information about such fields as spatial audio, acoustics, spatial trajectories, algorithms used in composition and many other aspects. The structure of the SpatDIF descriptor space is built on a hierarchical model more suited to scene descriptions than stream-based formats such as SDIF and GDIF.
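To make the scene idea concrete, here is an illustrative sketch of a SpatDIF-like scene written as OSC-style address/value pairs, with a small helper that reads back a source's trajectory. The addresses, source name and helper function are our own assumptions for illustration only, not taken from the published specification.

```python
# Illustrative only: a SpatDIF-like scene as OSC-style address/value
# pairs. Addresses and names are assumptions, not the real spec.
scene = [
    ("/spatdif/meta/author", "unconf demo"),
    ("/spatdif/source/violin/media", "violin.wav"),
    ("/spatdif/time", 0.0),                              # scene time in seconds
    ("/spatdif/source/violin/position", (0.0, 1.0, 0.0)),
    ("/spatdif/time", 2.5),
    ("/spatdif/source/violin/position", (1.0, 0.0, 0.0)),
]

def positions_for(scene, source):
    """Collect (time, position) pairs for one named source."""
    t, out = None, []
    for address, value in scene:
        if address == "/spatdif/time":
            t = value                                    # remember current time
        elif address == f"/spatdif/source/{source}/position":
            out.append((t, value))
    return out

print(positions_for(scene, "violin"))
# → [(0.0, (0.0, 1.0, 0.0)), (2.5, (1.0, 0.0, 0.0))]
```

The same pairs could equally be serialised to a file or streamed over OSC, which is the flexibility the white paper aims at.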
Nils Peters, CNMAT, UC Berkeley
Jan Schacher, ICST, Zurich University of the Arts
Trond Lossius, BEK, Bergen Center of Electronic Arts
http://www.spatdif.org
The 2011 International Computer Music Conference happened at the University of Huddersfield, July 31 — August 5.
The conference website is at www.icmc2011.org.uk.
The ICMC timetable can be found here.