I've spent quite a lot of time at work over the past year working with chopping up, segmenting and "chapterising" radio programmes. From Apple introducing chapterised AAC files, to an ID3 standard for doing the same to MP3s, to using the output of the BBC's hard-disk playout system, to the Annotatable Audio project. What all of these have in common is that they are looking into the structure of radio content (and hence other linear media) below the programme level.
Typically, programmes are thought of as the smallest unit of media - you have radio or TV networks, which are formed of schedules for each day, which are in turn formed of programmes. Though with the introduction of DVDs and their chapters, media has started to be split further.
I think it's important to start looking at the structure of radio programmes themselves, if only because of the growing amount of listening to downloaded media files, which allows non-linear playback - i.e. skipping, rewinding, repeating, shuffling... And once our programmes have been broken down into smaller chunks, people will be able to use them in the ways they have become used to with digital music.
As mentioned above I've been working and thinking around this for a while. For music radio I've used the output of the BBC's hard-disk playout system to split programmes up - in our systems an XML message is generated whenever a new track starts playing out. The playout system currently only covers our non-classical music networks (Radio 1, Radio 2, 6Music and 1Xtra) though hopefully Radio 3 will appear shortly. Matt Biddulph over at hackdiary has written about how he plugged this information into last.fm and also about some of the areas where this doesn't work - such as rarer music that is played out from vinyl or specialist music shows that are pre-recorded in a single chunk.
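To make the playout idea concrete, here's a minimal sketch of consuming one of those "new track" XML messages to mark a segment boundary. The real playout schema isn't something I've shown here, so every element and attribute name in this sample is illustrative only:

```python
import xml.etree.ElementTree as ET

# A hypothetical "now playing" message - the actual playout XML format
# differs; these element names are made up for illustration.
SAMPLE = """
<nowPlaying network="6music">
  <track startTime="2006-07-14T20:03:15" duration="215">
    <artist>Miles Davis</artist>
    <title>So What</title>
  </track>
</nowPlaying>
"""

def parse_now_playing(xml_text):
    """Pull out the fields needed to mark a segment boundary."""
    root = ET.fromstring(xml_text)
    track = root.find("track")
    return {
        "network": root.get("network"),
        "start": track.get("startTime"),
        "duration": int(track.get("duration")),
        "artist": track.findtext("artist"),
        "title": track.findtext("title"),
    }

segment = parse_now_playing(SAMPLE)
```

Each message like this effectively closes the previous segment and opens a new one, which is all you need to chop a music programme into track-sized chunks.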
For speech radio it gets a bit harder and we've taken a number of approaches. The completely manual approach relies on a producer chopping up the programme - Radio 1 do this for the Chris Moyles enhanced podcast using PodcastAV or Garageband. And with the Annotatable Audio project we're looking at whether we could get our audience to chop up and annotate our programmes using a wiki-like model of editing. But I've also been looking at some more automated ways of doing this - for instance we can generate a script or a transcript of a programme. This may come from the original script, from a speech-to-text engine, or from a transcription service like castingwords.com (see my review). Once we have that, and depending on its quality and format, we can break the programme down into sentences, paragraphs or contributors. For example, this is an extract from the interview that I submitted to castingwords.com:
Mark: And a question which in this country we call a desert island disk question I suppose is: Is there a particular piece that for you has worked out especially well?

Steve: Well I wouldn't single out one, but I would single out a sort of a greatest hits

And I've also experimented, with limited success it must be said, with grouping sentences or paragraphs together by looking at how different segments of text share common keywords.
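Splitting a transcript by contributor can be sketched very simply, assuming the transcription uses "Name:" labels at the start of each paragraph, as the castingwords.com extract above does (the helper name is my own):

```python
import re

# Transcript text in the "Name: utterance" style shown above.
TRANSCRIPT = """\
Mark: And a question which in this country we call a desert island disk question I suppose is: Is there a particular piece that for you has worked out especially well?
Steve: Well I wouldn't single out one, but I would single out a sort of a greatest hits"""

def split_by_speaker(text):
    """Return a list of (speaker, utterance) pairs."""
    segments = []
    for line in text.splitlines():
        m = re.match(r"^(\w+):\s*(.*)$", line)
        if m:
            segments.append([m.group(1), m.group(2)])
        elif segments:
            # a line with no label continues the previous speaker's paragraph
            segments[-1][1] += " " + line.strip()
    return [tuple(s) for s in segments]

turns = split_by_speaker(TRANSCRIPT)
```

Each (speaker, utterance) pair then becomes a candidate segment boundary in the programme.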
Anyway, once I'd got some information about how a programme is split up, I started looking at ways to display this in an interface. Something like the iPod simply displays a timeline with chapter marks that you can skip between. Or the latest iteration of our Annotatable Audio project uses coloured blocks to indicate the segments.
But I've tried to take this a bit further with some experimental visualisations based on segmented programmes. First, a tag-based visualisation that I've been calling "Tag Cities", built using Processing. It was originally inspired by discussions with Dan, and by things like Antoine Bardou-Jacquet's video for Alex Gopher's "The Child" and the searchscapes city visualisation.
It represents a programme as a timeline containing words. Each segment is represented by its tag, scaled to fit the length of the segment (and also shaded to match the size). I'm not sure about the scaling - I feel that maybe this gives too much prominence to longer segments and shorter tags, but I like the feel of the adjoining words of varying sizes. I also extended this to use the "most interesting" photos from Flickr that match the tags; these are stepped to provide a greater indication of the segment boundaries.
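The scaling rule can be sketched like this (not the actual Processing code - the point sizes and segment data here are made up):

```python
# Each segment's tag is drawn at a point size proportional to the
# segment's share of the programme's running time. Illustrative data only.
segments = [("intro", 60), ("interview", 420), ("session", 900)]  # seconds

MIN_PT, MAX_PT = 10, 72
total = sum(duration for _, duration in segments)

def tag_size(duration):
    """Linear scale from MIN_PT to MAX_PT by share of total running time."""
    return MIN_PT + (MAX_PT - MIN_PT) * duration / total

sizes = {tag: tag_size(duration) for tag, duration in segments}
```

A logarithmic scale instead of a linear one might soften the prominence that long segments get, which is the worry mentioned above.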
And in Steven Johnson's excellent "Everything Bad is Good For You" he talks about the ever more complex multi-threaded structure of popular television from Starsky & Hutch through to The Sopranos and includes some nice diagrams like these:
Which leads me onto the nature of time-based media. Inspired by the visualisations mentioned above I sketched this out:
It's showing how a radio programme (Jazz on 3 in this instance) can be broken down into segments - the introduction, a discussion of the artist, interviews and a live session. Each of these segments is further broken down (in green) into the individual interviews and the individual tracks. Time is on the x-axis and "segment" is on the y-axis, each new segment causes a step on that axis just like the Flickr visualisation.
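One way to hold this two-level breakdown in data might look like the sketch below - all the labels and start times (in seconds) are illustrative, not real Jazz on 3 data:

```python
# (label, start_seconds, sub-segments as (label, start_seconds) pairs)
programme = [
    ("introduction", 0, []),
    ("discussion of the artist", 90, []),
    ("interviews", 600, [("first interview", 600), ("second interview", 1200)]),
    ("live session", 1800, [("track one", 1800), ("track two", 2400)]),
]

def step_positions(segments):
    """Time runs along the x-axis and each new top-level segment bumps
    the y-axis step, as in the sketch above."""
    return [(label, start, step)
            for step, (label, start, _subs) in enumerate(segments)]

steps = step_positions(programme)
```

The green sub-segments would get the same treatment one level down, nested inside their parent's step.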
And we can take this a level further in detail:
Here I've added even more segmentation - the interviews are now broken down into the interviewee and into the individual questions and responses. Here's an outline of some of the interview...
14m30s: Jez Nelson talking about Miles' live performances

15m00s: Jez Nelson introduces Ronald Atkins who was at the gig

15m20s: Jez Nelson asks Ronald how much anticipation there was

15m22s: Ronald Atkins talks about how they had some inkling of what was to come but also surprise at the electric piano, which had hardly been seen before, and how loud Tony Williams was.

We could also take it another level down to the song structure. Popular music tends to have repeating structures of verses and choruses. Jazz has the theme/solos/theme structure plus several common forms such as AABA or the Rhythm changes. Classical music has the various forms of canons or symphonies, or the complex self-referential internal structures of Bach. And at XTech I got talking to a linguist who studies disappearing languages; they segment and annotate videos of native speakers into words and even morphemes and phonemes.
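Timestamps in the outline style above could drive skip-to-segment navigation directly; here's a small sketch of parsing them into seconds (the "14m30s" format is taken from the outline, the parsing code is my own):

```python
import re

def to_seconds(stamp):
    """Convert e.g. '1h14m30s', '14m30s' or '45s' into seconds."""
    m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", stamp)
    hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

outline = [
    ("14m30s", "Jez Nelson talking about Miles' live performances"),
    ("15m00s", "Jez Nelson introduces Ronald Atkins"),
    ("15m20s", "Jez Nelson asks how much anticipation there was"),
    ("15m22s", "Ronald Atkins on the electric piano and Tony Williams"),
]
chapter_marks = [(to_seconds(stamp), desc) for stamp, desc in outline]
```

Those (seconds, description) pairs are essentially the same data that chapterised AAC files and the ID3 chapter standard carry.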
Or we could take it a level up to the schedule of programmes...
So what have we got here? Time-based media seems to have a self-similar, or fractal, nature. Fractals are common in nature, showing a repeating, self-similar structure, and something similar runs from schedules down through programmes to music and speech. I'm not saying that they are truly fractal, or that there are any deeper truths here, but I think it may give insights into new ways of navigating and displaying radio and TV programmes.