Beats Music is the first streaming service that may make me cease regular use of my full-to-the-brim 80 gig iPod classic. I use that dinosaur because I am really picky about my music–by which I mean, I know exactly what I want to listen to, and I want to have easy access when and wherever I may be in the mood to listen to, well, anything from Nicki Minaj to Nitzer Ebb, which are right next to one another in my iPod “Artists” list. I love crass, stupid pop music (“Timber,” anyone?), so it’s not that I’m a snob who wants to have what I think are my elite tastes reaffirmed–I just want to have reliable access to what I do actually like without having to wade through crap I don’t.
So, one thing that I like about Beats is the back catalog. For example, one song I really like to run to is Dutch techno act LA Style’s 1991 “James Brown Is Dead”–it was hard to access on Spotify (at least back when I tried it out), but really easy to find on Beats.
But the thing that stands out, at least to me, about Beats is the playlists. They are really well curated, and most are even what I would call thoughtful–there was careful consideration not only of which songs to include, but of their order and arrangement.
Beats playlists are made by human experts. Unlike a lot of other streaming services, Beats doesn’t just rely on statistical algorithms trained by data; it also relies on people who have already spent a decade or more studying and working in music. Speaking in the April 2014 issue of Details, Beats CEO Ian Rogers explains this approach: “The algorithm works best when it has human input, and our premise is that you’ve got to start with really expert human input.” Though the Details profile tries a bit too hard to make this into a digital dualism (algorithms vs. “somebody there on the other end”), I think this is actually a “mixed reality” approach to curation. Instead of training algorithms on listening data alone, Beats runs its algorithms on input provided by already highly trained experts. So, for example, somebody–an ethnomusicologist, say–makes a playlist that I listen to a lot because it’s superb (shoutout to whoever compiled the “Best of Dance-Punk,” “Madchester Indie Dance” and “Best of Industrial Rock” lists!); the algorithms then recommend new playlists to me (“Daft Punk Is Playing At My House,” “Intro to Ministry”) based on my listening data. And so far this has produced a consistently good-to-great listening experience for me.
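Purely as a sketch of the division of labor described above (experts tag, algorithms match), here is a toy version in Python. The playlist names come from this post, but the tags, the scoring, and every function name are my own invention for illustration; this is not anything resembling Beats’ actual implementation.

```python
from collections import Counter
from typing import Dict, List, Set

# Expert labor: humans hand-tag playlists (tags here are invented).
EXPERT_PLAYLISTS: Dict[str, Set[str]] = {
    "Best of Dance-Punk": {"dance-punk", "indie", "post-punk"},
    "Madchester Indie Dance": {"indie", "dance", "uk"},
    "Best of Industrial Rock": {"industrial", "rock"},
    "Daft Punk Is Playing At My House": {"dance-punk", "indie", "dance"},
    "Intro to Ministry": {"industrial", "rock", "metal"},
}

def recommend(listened: List[str], top_n: int = 2) -> List[str]:
    """Menial labor: score unheard playlists by tag overlap with history."""
    profile: Counter = Counter()
    for name in listened:
        profile.update(EXPERT_PLAYLISTS[name])
    scores = {
        name: sum(profile[tag] for tag in tags)
        for name, tags in EXPERT_PLAYLISTS.items()
        if name not in listened
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

With my (invented) tags, a listener of the dance-punk and industrial lists gets pointed toward the two playlists the post mentions being recommended–which is, of course, exactly how toy examples always behave.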
Now, this raises all sorts of questions about, say, the distinction between expert and menial labor (people doing the ‘expert’ labor, algorithms doing the menial crunching), about the quantifiability of everything that goes into an ethnomusicologist’s training (knowing not just how music sounds, but music scenes and cultures, how artists and songs influence subsequent artists and songs, etc.), and even about the quantifiability of taste. We think we can quantify taste–this is what all those recommendation algorithms do. But those algorithms never really work as well as anybody (users or designers) wants or hopes.
The other innovative thing that the Details profile mentions is a feature called “The Sentence”:
you just enter your setting, mood, company, and genre of choice. (Example: “I’m on a rooftop and feel like drifting off to sleep with my bff to pop” delivers J. Cole’s “Land of the Snakes”; choose seminal indie and you hear Belle and Sebastian’s “She’s Losing It.”) “Our goal was to bring the joy of discovery and the joy of having just the right song for the right moment,” says Reznor, who likens the process to scoring scenes in a film.
“The Sentence” uses the variables of “setting, mood, company, and genre” to tailor your sonic experience to your ambience. It churns out music that reflects how you feel at any given time–at least, that’s how it should work. I think “The Sentence” is really interesting, theoretically or philosophically, because it treats music and sound primarily as a matter of feeling, ambience, and ecosystem. “The Sentence” provides the sound so that you can fine-tune your affective experience (how you feel) of any environment. I still need to think more thoroughly and carefully about this “Sentence” feature.
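Mechanically, “The Sentence” reads like a four-variable query matched against a curated catalog. A minimal sketch, assuming an entirely hypothetical catalog and names (the two tracks are the examples quoted from Details; the tag sets and matching logic are mine, not Beats’):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Sentence:
    """The four blanks a listener fills in."""
    setting: str
    mood: str
    company: str
    genre: str

# Toy catalog: curated tag sets mapped to tracks (tags are invented).
CATALOG = {
    frozenset({"rooftop", "sleepy", "bff", "pop"}):
        "J. Cole - Land of the Snakes",
    frozenset({"rooftop", "sleepy", "bff", "indie"}):
        "Belle and Sebastian - She's Losing It",
}

def pick_track(s: Sentence) -> Optional[str]:
    """Return the track whose tags best overlap the sentence, else None."""
    query = {s.setting, s.mood, s.company, s.genre}
    best = max(CATALOG, key=lambda tags: len(tags & query), default=None)
    if best is None or not (best & query):
        return None
    return CATALOG[best]
```

Swapping the genre blank from “pop” to “indie” flips the match, which is the whole conceit of the feature: the same setting, mood, and company scored differently by one variable.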
To be honest, “The Sentence” never gets it right for me as a listener. Maybe the options they provide aren’t right for me; maybe I just haven’t figured out how to game the options to produce the outcomes I would like. Or maybe I just don’t think about music in terms of feelings and affect in the first place and would prefer a different sort of interface (which I’m sure is part of my dissatisfaction, actually).
Robin is on twitter as @doctaj.
Comments (3)
Atomic Geography — March 28, 2014
Commenting on this is, for me, an Oliver Sacksian exercise. Brain-damaged 20 years ago, I went instantly from being a musical omnivore to experiencing a kind of pain when listening to music–so much so that I gave it up entirely until recently, when I started trying to desensitize myself to it.
Point being I've thought a lot about what it is to listen to music.
The ambient mode of music doesn't bother me. Watching a movie, etc., with a soundtrack is fine. I don't know if you are a musician, but musicians' brains "light up" in fMRIs in different ways from non-musicians': the neural pathways of playing light up while musicians listen to music, even if it's not their instrument. (Don't remember the source for this factoid.)
I was a musician and I suspect this is part of the difficulty for me. I think stimulating the actualizing part of the experience is what causes most of the pain for me. Actualizing anything is difficult for me.
So maybe if you are a musician, or if not, if you have taken an unusually participatory approach to how you listen to music (pure speculation here as to the science), I could see where an ambient approach to music selection might be unsatisfying. I also would guess that it would be more difficult to code.
Again, thinking about my own experience, I think pre-brain injury, the sequence of what music I would listen to was based on an enacting mode of experience as much as an affective one. Does this resonate with you?
Boaz — April 4, 2014
Interesting, AG. It sounds like you would accept an automated music-choosing system even less than others would, since the cost of a bad choice is higher for you. Brain damage must be very scary; I imagine not much is known about it, given how much about the brain is still a mystery.