Wednesday, 19 August 2009

Compiling bleeding edge SciPy on Mac OS X

I do most of my number-crunching computing tasks with SciPy these days, having basically kicked the matlab habit, with the brief exception of occasional use of legacy libraries. SciPy is a joy to work with, but it is a huge pain to build from source, thanks to nasty dependencies (fortran things mostly) and some system-specific hardware acceleration trickiness. Thankfully most users can download one of many prebuilt packages, perhaps the best being Enthought's. If you've ever wanted to see what SciPy is all about, this is the easiest way to do so.

That said, one of the great things about using SciPy instead of matlab is that it's python. Except all the prebuilt binaries (to my knowledge anyway) use python 2.5 at the newest. Again, probably not a problem for most, but I use the nice socket library (amongst other things) that's been improved significantly in python 2.6. So for a while I had my SciPy python and my everything-else python, and every so often I would make another attempt at building SciPy for py2.6 on my mac to integrate the two, and every time it would defeat me.

Until yesterday.

So I'm going to attempt to fully document the process, as I've now done it on 2 similar machines (home & lab), and now that I've figured out the tricky bits it seems fairly easy to reproduce. These instructions were followed on 1-2 year old intel-based macs running 10.5.8. (Note these instructions don't touch on installing the other pieces of a standard SciPy setup, ipython and pylab/matplotlib, as I've never had much trouble getting those to build. I believe the easy_install process works for both, mostly.)

  1. (optional) If you don't want to build universal python modules remove "-arch ppc -arch i386" from the BASECFLAGS and the LDFLAGS in the python library Makefile, which should live somewhere around here: /Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/config/Makefile
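    For example, a sed one-liner along these lines should do the trick (a sketch; check that the flag string matches what's actually in your Makefile before running it, and the .bak backup lets you undo):
    $sudo sed -i .bak 's/-arch ppc -arch i386 //g' /Library/Frameworks/Python.framework/Versions/Current/lib/python2.6/config/Makefile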
  2. If you don't already have Xcode 3.1.3 and the associated developer tools, you need to get it for Apple's custom build of gcc 4.2 (it's not the version that comes with most boxed copies of 10.5). Download and install a fresh copy of the Apple Developer Tools. You can get SciPy to compile with other variants of gcc 4.2 or greater (from MacPorts for instance), but they don't support the Apple-specific options, which are very helpful in other situations.
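    You can double check which compiler you've ended up with before going any further (depending on your setup, the 4.2 build may live at gcc-4.2 rather than being the default):
    $gcc --version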
  3. Download and install gfortran as a patch to the Apple gcc from AT&T Research. Why Apple doesn't leave gfortran in gcc I don't know, but they don't, and we need it. It's critical you use this fortran compiler, as other variants of gfortran or g77 seem to cause errors.
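    Again, it's worth a quick sanity check that the right gfortran is the one on your path:
    $which gfortran
    $gfortran --version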
  4. Download and install UMFPACK and AMD from SuiteSparse. The easiest way I've gotten through this is to download the entire SuiteSparse and then do the following:
    1. Modify the package-wide config makefile found at SuiteSparse/UFconfig/UFconfig.mk by uncommenting the Macintosh options (currently lines 299-303)
    2. In order to compile only the 2 packages we need, we also have to modify the top-level makefile (SuiteSparse/Makefile) by commenting out the references to the other packages under the default target (currently lines 10, 12-17, 19-24).
    3. Run make while in the SuiteSparse dir
    4. Because it would be too easy if SuiteSparse actually had an install routine, we have to install the just-compiled libs ourselves. This is how I did it, though you can stick all these bits wherever you like, as long as the python compiler and linker will see them:
      $sudo install UMFPACK/Include/* /usr/local/include/
      $sudo install AMD/Include/* /usr/local/include/
      $sudo install UMFPACK/Lib/* /usr/local/lib/
      $sudo install AMD/Lib/* /usr/local/lib/
      $sudo install UFconfig/UFconfig.h /usr/local/include/
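      If all went well, you should now be able to see the headers and static libs in place with something like:
      $ls /usr/local/include/umfpack*.h /usr/local/include/amd.h /usr/local/lib/libumfpack.a /usr/local/lib/libamd.a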
  5. Grab bleeding edge copies of SciPy and NumPy via their subversion repositories:
    $svn co http://svn.scipy.org/svn/numpy/trunk numpy-from-svn
    $svn co http://svn.scipy.org/svn/scipy/trunk scipy-from-svn
  6. Build and install NumPy:
    $cd numpy-from-svn
    $sudo python setup.py build --fcompiler=gnu95 install
  7. Test NumPy to make sure it's not broken (note that the tests need to be run from outside the build directory):
    $cd ..
    $python -c "import numpy;numpy.test()"

    Make sure numpy doesn't fail any of the tests (known fails and skips are okay) or the next bit may not work.
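    While you're at it, it's worth double checking that the python you care about is picking up the fresh build and not a stale one:
    $python -c "import numpy; print numpy.version.version, numpy.__file__"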
  8. Similar to step 6, build and install SciPy:
    $cd scipy-from-svn
    $sudo python setup.py config_fc --fcompiler=gfortran install
  9. Similar to step 7, move out of the build directory and run the built-in tests:
    $cd ..
    $python -c "import scipy;scipy.test()"

    You're going to get some fails and maybe some errors. You're going to have to use your own judgement as to whether these errors and fails are substantial. Most of the troubles I've encountered are trivial, things like a type being dtype('int32') instead of 'int32', which is actually the same thing; the test just needs to be updated to reflect newer numpy.
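    For instance, this one-liner prints True, which is why I call that class of failure trivial:
    $python -c "import numpy; print numpy.dtype('int32') == 'int32'"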
And now you have a nice SciPy build for whatever flavor of python you're working with on your Mac. Note that I have no idea how well this will work with anything other than python 2.6 on Mac OS X 10.5.8, though it will probably mostly work with other variants. Also, for completeness, I most recently compiled these versions: NumPy-r7303, SciPy-r5893. At some point I'm going to give it a go with python 3.x, but that will be a whole new kind of pain, I suspect. Anyway, if anyone uses these instructions and they don't quite work, or you don't understand part of it, please let me know and I'll try to clarify or help as best I can. I'd really love to build a definitive set of instructions for building SciPy on a Mac, but I can only verify these instructions on my machines.

Tuesday, 14 July 2009

musichackday

So I spent the weekend holed up in the Guardian offices at the musichackday. I went in with some perhaps overly ambitious plans to generate playlists across the SoundCloud user graph, with song selection optimization done with features via The Echo Nest. This might have been barely possible if I had been working with a couple of other people of similar background, but circumstances led to me hacking mostly solo at this particular event.

In the end I spent a substantial amount of time beating the SoundCloud python wrapper into being more helpful for what I wanted it to do (which is perhaps not what its envisioned use was, but hey, that's what hacks are for), namely walking the user (artist) space and creating a complex network, so I can move the playlist generation tools we've created around myspace crawls over to SoundCloud.

So, to that end, I've created some bits of python that walk through the user graph on SoundCloud and build a graph using iGraph. This code base is living over at a new github repository I've created called pySomethingClever. Included over there are diff files documenting the changes I made to the official SoundCloud-api-wrapper, which will enable any willing victims to grab and run the hacky bits of code I have up.

Once I got the api wrapper to a place where it could do a bit of what I wanted, I fired off a crawl. I got through about 4,000 users in SoundCloud's network before the presentations started on Sunday (of a complete user network of about 170k nodes, so ~2.3% of the network). To clarify slightly, the network contains all the users of SoundCloud, but only the out-links (users a given user follows) from those 4,000 nodes. This is to say I had a (mostly) complete vertex list and a very incomplete edge list. With the super great help of kurtjx, this sampled network was pushed through the lanet k-core decomposition visualization to draw out some of the community structure and related forms of the sample graph. Here's that graph (a sketch of the crawl loop itself follows the figure):



The size of each node is tied to the number of links (in either direction) touching that node. The color and placement reflect how critical the node is to the rest of the network maintaining its current state of connectedness.
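
For the curious, the crawl loop itself is roughly this shape (a sketch only: get_followings() stands in for whatever the patched SoundCloud wrapper exposes, not its real api):

  # rough shape of the crawl; get_followings() is a stand-in, not the
  # wrapper's actual api
  from collections import deque
  import igraph

  def crawl(seed_id, get_followings, max_users=4000):
      g = igraph.Graph(directed=True)
      index = {}  # SoundCloud user id -> igraph vertex id

      def vertex(uid):
          if uid not in index:
              g.add_vertices(1)
              index[uid] = g.vcount() - 1
          return index[uid]

      queue, done = deque([seed_id]), set()
      while queue and len(done) < max_users:
          uid = queue.popleft()
          if uid in done:
              continue
          done.add(uid)
          for fid in get_followings(uid):  # out-links only
              g.add_edges([(vertex(uid), vertex(fid))])
              queue.append(fid)
      return g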

Since the hack I've continued to gather edges toward a complete representation of SoundCloud. I currently have the out-link edges from more than 17,000 SoundCloud users (about 10% of the user base) and should have a full capture in the next few days. Below you can see the same visualization with the edges from 16,000 users (the crawler is set to write out the graph every 2k users):




As the crawl continues, my guess is the middle bits will continue to fill in, which would be expected if SoundCloud behaves in the usual power law fashion (as most of the Internet's networks, social or otherwise, tend to).
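
Once the capture is done, a quick way to eyeball that power law claim with iGraph (the pickle filename here is made up):

  import igraph

  # load the finished crawl and print a bucketed degree histogram;
  # a power law network should show a short fat head and a long tail
  g = igraph.Graph.Read_Pickle("soundcloud_users.pickle")
  print g.degree_distribution(bin_width=10)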

It should be noted that these visualizations, while very interesting, are just the beginning of what is possible once the whole user network is captured. I'm going to be building some playlist generators and recommenders around this in the coming weeks. If things look good (and from here I'm quite excited) I'll push some of it to the ISMIR late breaking demos and possibly to AdMIRe. More to come!

Friday, 19 June 2009

The Semantic Web and Why You Should Care - a highly informal slidebook

As you may have noticed, my posting has been super light (even for me) for the last few months. Sorry about that. Anyway, the reader may be interested in the presentation I gave at the IEEE/ACM Joint Conference on Digital Libraries, Workshop On Integrating Digital Library Content with Computational Tools and Services this morning (about 4 hours ago actually). These slides really require the presentation audio to make sense, and if it was recorded (unsure if it was...) I'll edit the slideshare to have the audio track as well. Regardless, here's the slidebook. Have fun.

Friday, 24 April 2009

back to playlisting

After a brief tangent into the wide world of mixing (of which I'll post some more from my proposed SMC paper in a bit) I'm back into playlist generation and related topics. Along those lines, it occurs to me that readers of my little blog may be interested in perusing my recently completed (Dec 2008, actually) M.Phil-to-PhD upgrade document. For those of you unfamiliar with the British PhD system, PhD students start as Master of Philosophy by research students, then after about 24 months of independent research go through a process of summarizing and defending their work so far and what they intend to accomplish in the coming 18-24 months. The outcome of this process is one of three things:
  1. Your work is deemed interesting, rigorous and sufficiently likely to succeed in the next couple of years.  As a result, you upgrade to a PhD student and continue on with your research (this is what happened in my case)
  2. You graduate at that point with an M.Phil.
  3. You completely fail your upgrade process.
So that happened back in mid-December. Here's the abstract from my upgrade:
A framework is described to consider various real world playlist use cases.  Automatic playlist generation is introduced as a means to improve music recommendation.  Literature in related topics is discussed.

A sample of the Myspace artist network is examined to investigate the relationship between social connectivity and audio-based similarity. Audio data from the Myspace artist pages is analyzed using well-established signal-based music information retrieval techniques. In addition to showing that the Myspace artist network exhibits many of the properties common to social networks, it is seen that there is an ambiguous relationship between audio-based similarity and social connectivity. Further, the Myspace sample is examined with the pairwise relational connectivity measure minimum cut/maximum flow. These values are then compared to a pairwise acoustic Earth Mover's Distance measure and the relationship is discussed. A means of constructing playlists using the maximum flow value to exploit both the social and acoustic distances is realized.

Two playlist generation methods are proposed for development and experimentation. The first is a direct extension of the Myspace dataset analysis into a robust playlist system for interactive internet radio broadcast. The second is a content-based system which uses expert-constructed playlists to build transition models which can then be used on new material. This is followed by a discussion of evaluation needs and strategies.
If you're interested in reading the whole thing (comments welcome and encouraged), download the pdf.

On an unrelated note, I'll be at Yahoo Open Hackday 09 in Covent Garden in a couple of weekends. It's free and I believe there are still some tickets if anyone is interested. It should be rad.

Thursday, 16 April 2009

Conference alterations...

So this paper I'm trying to put together on mixing algorithms just didn't quite come together in time for DAFx. Or more exactly, the actual research wasn't quite done till, like, yesterday. But it's not all bad. I'm submitting to Sound and Music Computing 2009 instead, which looks like it'll be an interesting conference. This also gives me a couple more much-needed days to hash out a few more thoughts on things.
Here's a teaser from the paper:

For the purpose of this discussion we will be dealing with various types of song transitions in order of temporal complexity, from the simplest to describe in time to the most complex (a code sketch of the simplest case follows the list).

    Song to song transition types

  • Arbitrary Length Fixed Time Crossfade

  • Phrase-Aligned Start No Tempo Adjust

  • Phrase-Aligned Start, Running Beat Alignment

  • Phrase-Aligned Start, Phrase-Aligned Finish
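
To make the first and simplest of those concrete, here's a minimal numpy sketch of a fixed-time, equal-power crossfade between two mono signals at the same sample rate (illustrative only, not code from the paper):

  import numpy as np

  def fixed_time_crossfade(a, b, sr, fade_seconds=5.0):
      # overlap the tail of a with the head of b over a fixed window
      n = int(sr * fade_seconds)
      t = np.linspace(0.0, np.pi / 2.0, n)
      fade_out, fade_in = np.cos(t), np.sin(t)  # equal-power curves
      overlap = a[-n:] * fade_out + b[:n] * fade_in
      return np.concatenate([a[:-n], overlap, b[n:]])

The other three types layer phrase, beat and tempo constraints on top of this basic overlap structure.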






Once I've got this draft done I'll post some more bits. In the meantime, anyone have any further ideas on core subdivisions of song to song transitions, from the perspective of time alignment?

Friday, 27 March 2009

working toward a rigorous definition of 'continuous mixing'

So I have been thinking about this idea of continuous mixing, or generally speaking, more content-aware automatic song-to-song transitions, for some time now. To that end I'm aiming to put something together for DAFx on the topic. As the deadline for submission is fast approaching, my writing is generally focused in that direction. In the meantime, however, I'll throw out a vague English definition of continuous mixing so that you, the reader, can get an idea of where I'm coming from on this topic, with some more rigorous definitions to follow.

continuous mixing: A song to song mixing technique where the goal is to obfuscate the transition between the two songs such that a casual listener cannot immediately pinpoint when the transition occurred. This can involve the use of beat matching/alignment, phrase matching/alignment, content aware equalization and other technical elements as well as sensible song selection with regard to harmony (i.e. key changes that make musical sense).


So that's my starting point. Anyone else have an opinion? What does continuous mixing mean to you? Bonus question! When is continuous mixing an appropriate transition technique in playlist presentation? I'm starting with modern electronic dance music, but these techniques can certainly apply to other musics (speed metal? maybe. free jazz? probably not...)

Wednesday, 25 March 2009

What is 'continuous mixing'?

So as some of you may have noticed, iTunes 8.1 just came out. It contains something called iTunes DJ. There are two interesting claims it makes, one of which is good and the other, well, would be good if they actually did it. As near as I can tell (I've only played around with it for a couple minutes), iTunes DJ is basically Party Shuffle++.

The new thing that's rad is the remote voting feature. If a bunch of people are at a party and they have iPod touches/iPhones, they can vote for tracks to be included in the ongoing playlist. This is a great idea generally (though I wonder what the average house party iPod user percentage is like...). I've been playing with some other democratically driven social playlists, though mine are a bit less direct, involving voting on the destination of a small chunk of a playlist.

[rant]
The new thing that's not so rad is this claim about continuous mixing. The splash page states that "iTunes DJ automatically picks songs to make a continuous mix of your music." Since I'm working on automatic continuous mixing, that got my attention. But the thing is, they don't mean take a few tracks, beat-align them, overlap them, phrase-align the crossfading, etc. Apparently they mean constantly select some tracks using nearly random criteria (seriously, the second track it picked was a chapter from a book on tape that's in my iTunes library. It's even genre-labeled 'Books & Spoken', so I don't know what sort of engine thinks that's a good track for the DJ playlist). I realize that at some level this is a matter of language. Clearly whoever made the splash page doesn't have the same meaning in mind for 'continuous' that I do, which involves smooth mixing or at least removal of leading and trailing silence. But I guess I just think that if you're going to call a piece of software a DJ, it ought to act like one.[/rant]

ISMIR tutorials

So I don't know exactly when this happened, but the ISMIR 2009 folks have published the tutorial schedule. I of course already knew about mine and the music and the semantic web tutorial presented by Yves, Kurt et al. The other two are 'MIR at the Scale of the Web' by Malcolm Slaney and Michael Casey, which I imagine will be all about using audiodb + LSH with extremely large collections of music. I'm sure this will be a great tutorial, so it's unfortunate that you won't be able to attend both it and mine, as they're at the same time. In the other afternoon session you can attend 'Using Visualizations for Music Discovery' presented by Justin Donaldson and Paul Lamere. As they both do great work, I'm sure it will be an excellent tutorial as well. Also, as a bonus, the tutorials happen to fall on the day of my birth, so there's that as well. Kobe is shaping up to be a conference packed full of goodness...

Sunday, 1 February 2009

beware the revolution

I joined twitter. If anyone wishes to follow my thoughts in an extremely terse form, this can be done here:

Also, I added a feed in the lower right on this page.  As you may have already noticed.

Wednesday, 28 January 2009

ah semantics

So I'm fairly certain this has been discussed before, but I'm going to pose the question again: Music Information Retrieval or something else? I prefer the term Music Informatics, as I find it to be more generally encompassing of the sort of work that gets labeled with the more widely used term music information retrieval, but perhaps not?

The (re)birth of a properly open artist centric social music platform?

So apparently muxtape has been reborn as a sort of attempt to do MySpace Music correctly. It seems too early to tell if this will work out for them, but I'm really hoping that 'correctly' means a gratuitously open api and quality linking open data compatible rdf. Though really, none of that will matter if they don't get past the critical mass threshold of users/participants. We'll classify this one as wait and see...

pretty DJs make playlists

or at least I do. I'm going to be mixing records this evening on Goldsmiths student radio, from 6-8pm GMT. Standard fare: drum 'n bass, breakcore, *core (that's a new genre I'm declaring; the hip kids pronounce it wild card core). Mixed live, in all its glory. I'm going to try to be in the freenode irc channel #wiredradio, but this may or may not work out. Also, if you have an aversion to visiting websites, but like using the internet, the stream is:
 rtsp://owl.gold.ac.uk/Wired 
If you throw that uri into something that can deal with streaming aac (I hope you're happy, Richard...) you should be good. Quicktime Player works, so does VLC, as does any player built on ffmpeg.

right.  back to writing up this ISMIR tutorial proposal.