As of January 1, 2016, Gapers Block has ceased publication. The site will remain up in archive form. Please visit Third Coast Review, a new site by several GB alumni.
 Thank you for your readership and contributions over the past 12-plus years. 



Detour

In the late 1920s, surrealist painter René Magritte explored the vagaries of language by replacing images with words. If a picture was worth a thousand words, was the reverse also true? Magritte's word paintings were deceptively simple. They typically involved an amorphous white object with rounded edges framed in wood. In this object was painted a single word or description. These wood-framed blobs were carefully painted in a flat style into a simple setting -- usually on a hardwood floor and resting against a bare off-white wall. One of my favorites has the words "Sad Girl" (in French) painted in a kidney-shaped object resting quietly against a bare wall. The idea was that the words describing the girl would supplant the image of the girl.

Of course they didn't. The words only suggested the idea of the sad girl. But the painting, as a whole, didn't depict a sad girl. You were left to your own assumptions about the sad girl; you were left to fill in the blank, so to speak.

These paintings weren't failures. They were careful studies of the limitations of spoken and written language as they apply to the evocation of imagery -- limits demonstrated even in the first paragraph of this article. Unless you're familiar with the imagery -- the actual painting or a similar work by Magritte -- you've only got my vague description of the picture to go on. Compare the actual work to your mind's image of it based on my description, and it's a certainty that your mental picture will differ from the real thing.

In the name of specificity, and in an effort to do justice to Magritte's brilliant work, I could spend the next 3,000 or so words describing for you the colors that Magritte used to paint the wood floor, the tediousness of the woodgrain that he recreates in both the floor and the frame, and the off-white -- almost aged-yellow -- color of the wall. I could get really damned specific -- the unnaturally forced perspective of the painting is evident in the floorboards that narrow, at their sharpest, to a seventy-degree angle. If that's the actual angle of the sharpest visible floorboard. I'm just guessing here.

The point is this: no matter how specific I get, you're still not going to see the image. You have to experience it for yourself, visually, to truly perceive the image.

Why is this important, you ask?

Because we are consumers, and we want to know what to expect when we consume. We all know that time is precious precisely because our time on this earth is bounded by two fixed points of reference. There are countless adages reinforcing the notion that time should not be squandered, that life is precious, and that we should lead rich, productive lives. These concepts aren't new to us. We've always been consumers of one fashion or another. Of wheat and work. Of power and influence. Of sex and raw eros. We are visceral beings. We thrive on life. And artworks of any kind are a rich facet of the actions we collectively call "our lives."

(Did you notice that, in that passage, you understood exactly what I was talking about? Conceptually speaking? Language functions beautifully, with great specificity, in describing the transactions of actions and ideas. Just something to think about as you read this article.)

I recently picked up The Decemberists' record Castaways and Cutouts. I'm a huge fan of Neutral Milk Hotel, and the comparisons between The Decemberists and Neutral Milk Hotel excited me. I've had a deep craving for new or similar material since NMH fizzled after In the Aeroplane Over the Sea. Could The Decemberists be my answer?

Sadly, no. No new Neutral Milk Hotel fix for yours truly. Aside from The Decemberists' greatness (and they are great), they're nothing like Neutral Milk Hotel. What the hell went wrong, then? Why was I so misled?

In a word: ineptitude. The criticism I'd read everywhere from PitchforkMedia.com and Spin to the blurbs posted on the record bins at Reckless Records failed in its attempts to impart to me a taste of the music I was about to consume. This time, I didn't leave disappointed. But I have in recent weeks: the new Basement Jaxx record left me out 15 bucks. And in spite of favorable reviews, I wasn't a big fan of Cursive's most recent outing. Ditto for Dido. Am I going to force you to agree with me that these records are bad? No. Not really. Partly because they're not that bad. And mostly because it's just my experience with the music. Yours could be totally different.

I'm pretty diplomatic that way.

Most music reviewers aren't.

[Image: 02062004_number1.jpg]

Eighteen months ago, Gapers Block editor Andrew Huff and I made a cross-country trek to deliver me, my cat, and my cargo to beautiful San Diego, California. On the stretch of highway connecting Springfield, Illinois to St. Louis, Missouri, Mr. Huff and I discussed the state of popular music criticism. Topics ranged from our favorite bands and records at the time (his: the Twilight Singers' debut album; mine: Beck's Sea Change) to the very nature of criticism itself. Our discussions led to the creation of our Rock Crit blog, which we've been ignoring hard-core for the last few months. It started out with such great intentions. It just didn't go anywhere. Like a lot of da Vinci's work. (Dare I compare Andrew and myself to da Vinci? Why the hell not? Let's be ambitious, shall we?)

But to return to our topic: the intent of our blog was to explore the limitations of music criticism as it stands today and to suggest solutions to the dilemma that music critics and consumers of music criticism are faced with. That dilemma is three-fold: helping consumers to make quality music purchase decisions that fit their tastes and interests; educating a listening audience in the language of music crit's expression and opinions; and honing the craft of music crit as an art form in and of itself.

Contemporary music crit has succeeded only in the last of these. I don't know about you, but I love reading record reviews. They seem so erudite, so witty, so carefully crafted to impart... what? Well, in a word, cleverness. The author wants to show you just how witty, well-read, and clever he or she truly is. That's all well and good -- art for art's sake and all of that -- but it's utter bullshit when it comes to helping consumers make decisions and educating a listening audience in the descriptive language that successfully describes the music under review. In the first case, I'll tell you what I as a consumer want: I want to know if the new Ani DiFranco record is worth listening to. Let's face it: her output has been uneven in recent years. I love her voice and think she's a talented songwriter and brilliant lyricist. There have just been a few of her records I wish I'd skipped rather than shelling out 15 bucks, only to sell them later at Disk-Go-Round for five. And I don't want to take a 66 percent loss in this particular music-purchasing gamble. What I want my music criticism to tell me is whether or not I'll like it.

But music criticism can't do that. A review of DiFranco's new record will probably draw comparisons to previous outings (in this case, her earlier work, because on this recording she's getting back to her roots -- it's just her and her guitar, after all), explore the influences on her current "sound" and lyricism, and make some kind of statement about the record's place in both her oeuvre and in the canon of popular/jazzy/rockish folk music as a whole. But would that tell me anything that'll assist me in my purchase? Hell no. And why is that? Well, there are two reasons.

The first is obvious: record reviewers are either writing to a broad audience or to a narrow audience. In either case, the review probably isn't aimed specifically at you as a consumer and, as such, isn't necessarily going to persuade you in one direction or the other. Granted, a reviewer can't crank out reviews suited to every individual. But neither the review aimed at a general audience nor the narrow one is necessarily more helpful than the other. Broad audience reviews have obvious limitations: the author is writing to the lowest common denominator and attempting to describe a record to everyone in terms that are easily understandable by all. This leaves individual readers with a vague sense of what the record might sound like and is about, but it doesn't get them any closer than that.

Narrow reviews hold more promise, but they still get it wrong. They certainly have the virtue of aiming themselves at a more specific, probably more knowledgeable audience (at least in terms of the artist or genre) than the big reviews do. In doing so, they actually help a select few make purchasing decisions. But the greater population gets left behind in the shuffle.

I know, I know, I just assaulted broad-audience reviews for being too general and not very helpful to the individual. But narrow reviews aren't helpful to most individuals, and here's why: a reader has to go hunting through multiple publications to find a review that's aimed at him or her. And it's not like every publication is going to be one that any specific reader will implicitly trust. As a rule, I tend to be a PitchforkMedia.com reader; however, Pitchfork leans toward the indie rock/emo-punk end of the spectrum and as such regularly pans such records as Nine Inch Nails' The Fragile and Tool's Lateralus, both of which I enjoy very much. So, no one publication's reviews can be the panacea for any one reader.

(There is a solution out there: MetaCritic.com compiles music reviews from various publications and scores and references them for easy comparison and consumption. It's worth checking out -- if you have time to invest in reading 20 reviews looking for the one that applies to you.)

The second reason music reviews won't help me make a music purchase is more complicated and points to my earlier assertion that one of the aims of music criticism is to educate its audience in the language used to describe music as an art form. Remember Magritte? Paintings and sculpture are easier for us to describe because the language we use to describe action and ideas contains words that are easily transferable to the description of lines, color, and composition that make up these art objects.

As nebulous as the concept of composition is as it applies to painting, most of us haven't the foggiest notion of what composition means when it comes to describing a song or a "concept record." While the language we reserve for the visual arts can't fully communicate the seeing of those works, the same language falls on its ass when describing music. That makes sense -- the concept of line, for example, between the two is radically different. Moreover, music is just a totally different animal. As such, we need a separate language to describe it.

The good news is that a language already exists that describes music. The bad news is that contemporary listeners and, I suspect, reviewers haven't a grasp on that language at all. I sure don't. And even if a reviewer correctly describes a work of music in terms of its codas, refrains, verses, arpeggios, and blah blah blah, would you as a consumer know what the hell he or she was talking about?

Even the most musically literate consumer is still screwed by the state of music criticism today. Again, most music critics are as deaf to the language of music analysis as the average Joe/Sally is. There is no musical equivalent to the educational process that your typical art critic undergoes -- or, if there is, most music critics don't go through it (with the probable exception of most classical and jazz critics). Most music critics are fans, primarily, and their enthusiasm or lack thereof is more evident in their work than any discussion of structure, meter, tone, etc. Consequently, the average rock review comes off like the excited and/or snide writings found in an erudite college student's journal -- essentially telling a reader in ornate terms that he or she thinks something rocks or sucks. It all boils down to this: music reviewers are even less successful in evoking the very sound of a piece of music than an art reviewer is in evoking the visual elements of the work they're reviewing.

Music magazines tacitly acknowledge this limitation by resorting to a numbered, barometric scale of a record's quality. You know the ones: Rolling Stone's five-star scale, Spin's ten-point scale, and the even more granular decimal scale used by Pitchfork. The idea is that these scales are supposed to enhance the reviews and give you, the consumer, a better sense of whether or not you're going to like the music you purchase. Like every other aspect of music criticism, though, they fall flat on their faces.

I'll resort to an allusion to Dead Poets Society here. Remember the J. Evans Pritchard scale for measuring a poem's worth? The good doctor asserted that you could plot a poem's perfection and importance on the x- and y-coordinates of a graph. The total area achieved by the evaluation revealed the poem's overall greatness. I don't remember if Robin Williams decried it as hokum or horse puckey, but it stinks of bullshit to me.

The same can be said of the numeric scales used to evaluate music. Let's take the example of RS's five-star rating system. Anybody remember what it used to mean? Four stars were reserved for the very best in music. That's right: four stars. And what did RS reviewers award five stars to? Classics, or records so unfathomably brilliant as to be instant classics. In spite of the fact that RS's current incarnation has totally lost any semblance of perspective in its rating system (how many five-star ratings are awarded per issue these days, anyway?), even when it was good, the rating system still didn't help most consumers make educated decisions about any record purchases other than those of the most popular of pop music. The benchmark was literally set by Bob Dylan, the Rolling Stones, Bruce Springsteen, and U2. And let's face it: the Stones haven't had a great album in decades, and U2 jumped the shark during the first leg of the Zoo TV tour. So stop awarding them five-freakin' stars already. Bono could sneeze and it would trigger an excited blip on the RS radar.

But I digress. It's not just the five-star ratings that are skewed in the Rolling Stone rating system, either. A disproportionate number of three- and four-star reviews are thrown at consumers on an issue-by-issue basis. At first blush, a reader/consumer might assume that RS just doesn't publish reviews of shitty records. In some cases, this is probably true. But read many of the three- and four-star reviews that have made it into the hallowed pages of Rolling Stone more closely and you'll discover something troubling. While a record might be rated a three, the actual review of it seems to indicate that the record is mediocre -- at best a one-star album.

[Image: 02062004_scores.jpg]

The discrepancy between the review and the rating is alarming, and it has an obvious explanation. The editor sets the star rating for an individual record independent of the critic's review. The lowest common denominator here looks primarily at the stars awarded to a record and probably doesn't even take the time to read the review. This is what the industry-driven publisher is after: sales driven by positive record analysis. But this shortchanges those consumers who want more from their music than the latest Janet Jackson single. What is a consumer to take away from a music review whose barometric score says the record's a worthwhile investment while the written portion indicates that it should be sold in the discount bins at Tower? The consumer is hungry to find good music. And mom is saying one thing about a record while dad is saying another. Here the rating/written-review marriage fails.

In lieu of educated, educational descriptions of the sound of a passage of music, most music reviewers resort to two half-assed methods of music description. The first involves describing the qualities of the instruments and sounds that make up a song or record. How many times have you read the phrases "searing/spiraling guitars," "kinetic drums," and "thundering bass" (or similar) in a record review? What, exactly, do those mean, and what do they conjure in your mind? Do you know exactly what a "thundering bass line" sounds like, especially when the term is applied to everyone from Flea to Geddy Lee?

The second is to employ our knowledge of other bands' music to describe the band-in-question's music -- in terms of sound, songwriting, and style. Reviewers often resort to this tactic in order to come closer to defining a record's sound, that elusive quality that, hopefully, sets the band apart from all others. Liz Phair's recent outing could be compared to the last few records by Sheryl Crow. Radiohead's Hail to the Thief has been compared by Spin authors to the band's earlier works, having been called a culmination of everything from The Bends to Amnesiac. And let's return, shall we, to the comparisons drawn between Neutral Milk Hotel and my recent purchase, The Decemberists. While these may be valid comparisons for the reviewer, the reader may not know the bands the reviewer is associating the band-in-question with. What's more, the comparisons a reviewer contrives may not make any sense at all. A reviewer may hear the slightest something in the music that just happens to trigger the thought of another band for him or her. But does that mean that the same passage of music will do the same for you? I've read reviews of Make Up the Breakdown by Hot Hot Heat that suggested the band exhibits hints of The Cure, but I don't hear that at all.

Finally, we return to the one aspect of music criticism that is successful: cultivating music crit as an artform in and of itself. The best in music criticism is usually well-written stuff. It celebrates its author's grasp and knowledge of both popular and obscure music. And, most importantly, it's really persuasive stuff.

That's right: persuasion. Music reviews are, at their core, all about persuasion. But not in the way you might think. One assumes that music reviews are designed to persuade a consumer to make a purchase. But that's not really the case. Music crit is meant to persuade you that the critic's tastes are right and important, and that you should therefore base your purchases on his or her recommendations. That's harsh medicine, I know, but we can't reform music criticism as it stands today without being honest with ourselves about what it is and where it fails.

Music criticism fails because critics and their listening audience lack the necessary common vocabulary to describe and comprehend music. Music crit fails because as an artform it's totally self-obsessed. And music crit fails because it generally doesn't give listeners an accurate taste of what's in store for them if they do purchase the music in question, so it doesn't adequately help them make purchasing decisions that reflect their tastes and interests. (In fairness: services like Amazon.com that offer musical snippets for the music consumer to sample definitely assist the written reviews that accompany said music).

In short, the state of contemporary music criticism is that it's a complete fucking failure.

So what's a reform-minded critic of music criticism to do?

Well, the best possible solution would be for both reviewers and readers of rock criticism to learn the language of music. I'm talking notes, melody, song structure, volume, tone, quality of playing, and more here, folks. Of course, there are serious limitations to this solution. The best option would be for music education to be offered as aggressively as the fine arts are from grade school on -- by which I don't mean just instrument lessons but also actual music theory and appreciation. That way consumers and reviewers would have the same level of competency with regard to music terminology. (Not that they don't right now: collectively, we don't know jack.) But that's not going to happen in the No Child Left Behind era of education.

Another option might be for music critics to embrace point number two of the above discussion: educate the listening public in the language of music. I'm afraid that's not going to happen, either. Most music publications aren't going to set aside a couple of pages each issue for passages describing tone, structure, and blah blah blah. Their readers are in too much of a hurry to find out if Justified is better than anything Justin put out with 'N Sync (and it is) -- most of them are in so much of a hurry that they aren't even bothering with the written reviews anyway and are instead looking solely at the ratings. Advertisers want to pay for content that will be read by consumers. Engaging, sound-bite-sized content ensures that short-attention-span readers will keep coming back for more. So that's what music magazines publish. Witness the rise of Blender, with its highly touted 100-plus reviews per issue.

So that blows the whole educate-the-masses plan.

What, then?

Well, assuming that music criticism as a written art form is going to stay the same as I concluded earlier (and it probably will), we could reform the numeric rating systems that accompany the articles. That's right, the very barometric tool I dissed royally a few hundred words back. How, you might ask, would we go about doing this? And how might we trust that a modified music rating scale might be an improvement over the status quo?

Setting the bar appropriately is one way of giving a music consumer a sense that a barometric rating scale reflects their tastes. If Spin were to say, "a seven is about where the White Stripes' Elephant falls and a ten is where any Radiohead record falls," you might get some idea of how their ratings scale is benchmarked. But that presupposes that you are familiar with the White Stripes and Radiohead. If you aren't, you're out in the cold.

Believe it or not, I'm kind of a fan of Amazon.com's customer rating scale -- in principle. Granted, every schmuck in the world has a voice in reviewing a record on Amazon or a similar eOutlet. And those schmucks all get to assign said record a star rating. But it's highly democratic. Everybody's voice is heard.

There are a couple of problems with the system, though. First, allowing the star rating to be driven solely by consumer reviews is a real problem, because only two types of consumers go out of their way to write record reviews on Amazon: those who really liked the record and those who couldn't stand it. Both feel strongly enough to share their feelings on the subject. And both severely skew the scale of the review -- mathematically speaking, there isn't a true middle rating on the Amazon scale. There are tons of positive reviews. There are a couple of extremely negative reviews. But there are almost no reviews in the middle ground. Consequently, the average rating is pulled hard in one highly subjective direction or the other along the numeric scale.
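To see the skew in miniature (with made-up numbers, not actual Amazon data): a pile of raves and a few pans, with nobody in between, produce an average that reads like consensus.

```python
# Hypothetical Amazon-style star ratings for one record: the fans rave,
# a few detractors pan, and the ambivalent middle never writes in.
ratings = [5] * 12 + [1] * 3  # 12 five-star raves, 3 one-star pans, no middle

average = sum(ratings) / len(ratings)
print(round(average, 2))  # 4.2
```

A 4.2-star average looks like near-universal acclaim, but it summarizes a distribution with almost no votes in the middle ground.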

Second, there's the element of time. In order to get a number of reviews of a record under the Amazon belt, at least five and probably 20 people need to take a risk on that record and buy it, either without prior indication of the record's contents or with some knowledge garnered from standard rock criticism -- and you know how I feel about that. Also, music crit strives to be written in a timeframe contemporary with the release date of the record. It needs to be fresh if it's to mean something. And how many of Amazon's consumers actually get to listen to a record prior to its official release date?

[Image: 02062004_amazon.jpg]

But I still have faith in a modified consumer rating system. So here's my proposal.

Create a stable of volunteer consumer reviewers and ask them to review something like two or three records a month in exchange for five free CDs of their choosing. (That sort of compensation is no sweat off a major label's back.) Then, for each record to be reviewed, select 20 volunteers from your stable, send them advance copies, and get their numeric ratings. Average the consumer ratings. And voilà.

Which rating scale would I propose we use in conjunction with the volunteer consumer review process? Well, I'm opposed to the Rolling Stone five-star method because it's too broad, too easily misinterpreted, and not sensitive enough to rate a record in any meaningful way. On the other end of the spectrum, Pitchfork's 10.0 decimal scale is probably too fine-grained. It's one thing to say that anything between a 9.5 and a 9.9 is spectacular, but why do we care whether it's a 9.7 or a 9.8? Does that level of detail actually help us make our music purchasing decisions? No, but fortunately it doesn't hinder us, either.

I'm thinking that a 10-point scale, such as Spin's, probably offers the best of all worlds here. It's sensitive enough that there are distinctions between the ugly, the bad, the good, the great, and the utterly amazing. But it's not so granular that we lose track of those distinctions, either. Moreover, a 10-point scale is easier to use when aggregating reviewers' scores into a composite rating.

Those of you with mathematical inclinations might ask which averaging method would be best suited to building a composite rating -- mean, median, or mode. In this case, I'd recommend a strict mean (i.e., simple) average: total the reviewers' ratings, then divide by the number of reviewers to reach a composite score. Round up or down to the nearest whole number and you've got yourself a rating.
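That arithmetic is simple enough to sketch. The 20 scores below are hypothetical volunteer ratings on the 10-point scale, not real data:

```python
# Twenty hypothetical volunteer ratings on a 10-point scale.
scores = [7, 8, 6, 9, 7, 7, 8, 5, 6, 7, 8, 7, 6, 9, 7, 8, 7, 6, 7, 8]

# Strict mean, rounded to the nearest whole number, as proposed above.
composite = round(sum(scores) / len(scores))  # 143 / 20 = 7.15, rounds to 7
print(composite)  # 7
```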

Admittedly, such a scale is not perfect. It only gives a general idea of what people thought of the record. And like any statistical sampling, it's greatly influenced by the qualities of the population that reviews the record. If the randomly or even non-randomly selected 20 reviewers were all fans of similar genres of music -- let's say indie rock -- you're not likely to get a positive review of the latest speed metal record. And that's not very helpful to speed metal aficionados.

We could also open the ratings system up further. Once the original review and ratings are published, we could (and probably should) open the review up to additional consumers and allow them to positively and negatively influence the record's rating along the numeric scale. That, combined with their individually written reviews, might give us an even better indication of the likability of the record.

But even that fails. The consumers still need to take a risk on the record and assume that the modified rating is an adequate reflection of their own tastes. They have to care enough about the resulting purchase to write about and rate it. They have to do a ton of legwork and reading to process the music reviews/ratings prior to making that purchasing decision. And such an improved rating lacks the timeliness of a review that is contemporary to the release of the record.

So, bring in the clowns. It's time to bring the professionals back into the mix. For better or for worse, the pros offer a more refined perspective in reviewing music, if only by virtue of the fact that they've just plain listened to a lot more of it than you and I have. And whether or not their reviews are truly helpful, a certain degree of trust is placed in the assumed authority of the professional reviewer.

To make this work, we take the collective opinions of the music press -- such as the composites offered on MetaCritic.com -- and factor them into the final number. Give the "professional" reviewers' scoring of the record a certain weight, and give the totality of consumer reviews another. I'm thinking something like 30 percent and 70 percent, respectively, or maybe as high as 45 percent for the pros. We then have a more sophisticated basis for the rating.
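As a sketch of that weighted blend (the scores and the 30/70 split below are hypothetical illustrations, not a published formula):

```python
# Hypothetical inputs, both on the same 10-point scale.
pro_score = 6.0        # aggregate of professional reviews (MetaCritic-style)
consumer_score = 8.0   # mean of the volunteer consumer ratings

# The 30/70 split floated above; bump PRO_WEIGHT toward 0.45 to favor the pros.
PRO_WEIGHT, CONSUMER_WEIGHT = 0.30, 0.70

final_rating = round(PRO_WEIGHT * pro_score + CONSUMER_WEIGHT * consumer_score)
print(final_rating)  # 7
```

Note the choice of weights matters: at a 45/55 split the same inputs land closer to the pros' lower score before rounding.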

Couple the rating with a review that is complementary to it, and you've got something. The rating shoots for the broader end of the consumer-purchasing spectrum, while the written review narrows the field a little, gets into the nitty-gritty of the record, and preserves the artistry of the written review.

This modified result isn't perfect, but it's a vast improvement over the status quo. Now if only we could work music education into music criticism.

Comments

QPG / February 6, 2004 8:43 AM

Very interesting article. A lot to digest! Very thought-provoking. I'd give it a 9.0 and say it's definitely a "new classic".
Great article!
Q
(my comments get a 5.3)

Kenan Hebert / February 6, 2004 12:48 PM

I don't agree with everything here, but I like the thinking that went into it. The problems you describe are the same ones I couldn't get my head around when I was trying to write straight record reviews. I finally decided that making mixes and posting songs on my website was much more useful and rewarding. When I see that someone has downloaded a Bark Psychosis track from me, I am happy and a little relieved, since I would have had a hell of a time describing it.

Craig / February 6, 2004 1:16 PM

This article is almost a parody of itself! Not only do I feel like you have completely missed the point of the existence of "criticism", but you did so in an over-analytical manner worthy of publication in the most pretentious rock crit rags you contemptuously reviewed!

The thesis of this article is what is flawed. A (pop) music review is not meant to be an accurate barometer of how much a record will appeal to you. You hit the nail on the head when you said that the review is about persuasion. You slammed the nail home when you said that the point of the review is to align the reader with the writer's tastes. I read Pitchfork daily. I have a general idea of what each author likes/dislikes, as well as the "musical aesthetic" the publication as a whole tends to support. Through past experience I have found myself in agreement with the vast majority of their reviews; therefore I align myself with their tastes and am able to use the reviews to gauge whether I should investigate an album or not.

This is not a perfect system, and dammit, it shouldn't be! For example, do you develop your opinions on news issues from only one source? I sincerely doubt it -- you see something on TV, you read another viewpoint in print, and maybe you read again on the internet -- and from these varying viewpoints you synthesize your own views.

The same goes for a music review, and criticism as a whole.

You find a source that most closely aligns with your tastes. When a criticism piques your interest, you investigate further, be it through research, friends, or sampling the music itself. This cross-referencing is the only way to really form an accurate "rating" that applies to you.

Educating the public and reviewers about music theory totally misses the point of a popular music review (and I would even venture to say this applies to "art music" [jazz, classical, etc.] too). Music appeals to the listener on a visceral level. Very few people would buy a record solely because of the time signature and chords used. Sure, that amazing sax riff or technically intricate drumming might impress you, but in the end you love a record for its visceral qualities. It captures your emotions at the right place and the right time. It challenges you just the right amount -- it's not too easy to love at first, but you grow into it. Or maybe it is something fresh that filled a void you never knew existed. None of this is achieved through theory.

However, I do applaud you for expanding upon the concept of the modern review. The days of the high-priest critic have been turned upside down by sites like Amazon, where the common man is able to leave his thoughts. This new method of critiquing has brought a new dimension into an otherwise one-way method of communication. However, in the end, the only accurate judge of your own tastes is YOU.

OK, I've spouted enough. Although I have an opposing opinion on this subject, I am overjoyed to see opinion pieces like this on Gapers Block! Getting a reader reaction is the ultimate reward -- think of it as "critiquing the critic." Have fun.

PS -- I think it's for the best that the Decemberists aren't the same as NMH, but that comparison got me to buy two great Decemberists records this past year.

Kenan Hebert / February 6, 2004 1:24 PM

More merciless panning of the article here.

Kenan Hebert / February 6, 2004 1:26 PM

Oh, sorry. Link:

http://ilx.wh3rd.net/thread.php?msgid=4266856

Andrew / February 6, 2004 1:37 PM

Craig, I see your point about music education -- you're right, very few people buy music based on its time signatures. But what I think Brandon was trying to say (and I agree with him) is that music education would provide listeners and reviewers with a common base of knowledge that would allow for more informative music criticism. Flowery language is nice and all, but if it doesn't tell you what something sounds like, it amounts to nothing more than a display of the critic's cleverness.

You're right that the best way to use music reviews is to find a critic with whom you share the most opinions; that's the strategy I use for movies. But most people don't do it for movies or music, and it's a lot harder with music anyway -- far more output, and it's hard to tell if you agree with a critic if you don't buy/listen to everything the critic reviews.

Your method of cross-referencing and researching music is admirable, but most people aren't so discerning. Their cross-reference is the radio and Rolling Stone, maybe a friend. One radio hit isn't going to tell you if a whole album is good, and RS's reviews are pretty useless, in my opinion. A service that brings hundreds of consumer and professional reviews or ratings together would be a boon to the casual music consumer and a good cross-reference for the more discerning consumer, such as you.

Craig / February 6, 2004 4:09 PM

Flowery language is nice and all, but if it doesn't tell you what something sounds like, it amounts to nothing more than a display of the critic's cleverness.

That reinforces Brandon's Magritte reference -- words can't emulate the music. Therefore the review's purpose is not to make you "hear" the music, but to get you interested, so that maybe you'll download some songs, or go to a show, or ask a friend about the band. You can't expect a review, much less a numbered rating, to give you any information on how a piece of music will resonate with you. If people aren't discerning enough to cross-reference or research their musical interests, then I don't think they'll be interested in picking up any musical education either.

This whole debate is built on the assumption that people care about music reviews and use them to discover new music, which is certainly not the case for the majority of America. I would venture to say the sales of vapid dance pop, banal thug jamz, and trite mall punk are based almost completely on radio and MTV airplay.

This leaves the discerning minority (people who debate this kind of stuff [aka us]) to read reviews. With the availability of sources like Allmusic.com, iTunes, Amazon/CDnow, file-sharing networks, etc., it's become easier to explore music than ever before in history. There's no reason to use reviews and critics for much more than to learn about the release of new albums and to get a sense of what a record is about.

Now back to my Avril mixtape.

Jen / February 7, 2004 4:53 PM

I agree with this article for the most part. With reviews lately it seems to be a battle of indie cred vs. record-industry payoff. I can't take either of the two seriously because they tend to tell me nothing about the music. Giving a Britney record four stars? Rolling Stone. Telling me the Yeah Yeah Yeahs record is half-crap because Karen O and the boys are getting popular, and saying that everything Billy Corgan does from now on is bullshit because he has an ego? Pitchfork. I love the Yeah Yeah Yeahs and the Pumpkins, and these reviews tell me nothing about the music. They only tell me how good the music is based on how much the artists get paid and how nice they are. Not that I'm saying Spin and Rolling Stone are sinless (far from it; they are often the worst in their own way), but Pitchfork is certainly not the light of the world in reviewing albums. Rolling Stone has been the worst offender in music reviews by rewarding mediocrity or sheer bullshit.

But what both of these institutions of the rock review never seem to consider is whether an album is actually enjoyable or not. Multiple reviews called the Strokes the new Velvets (bullshit!). Not only did I find that wrong, I found myself insulted on Lou Reed's behalf. I won't get into that in this comment, though. The point is that a reader/listener looks for different elements in reviews and in music. I generally look at just about everything, but no matter how well written any review is, it still will not sell me on the Strokes until I hear the damn album. This applies to many albums I buy. I take advantage of the internet mainly to weed through the bad critical reviews. The best point the article brought up is the painting analogy. You cannot describe a Picasso or a Monet; you've got to see it. In effect, the same thing should be said about music. Yet unfortunately, the business being the way it is, we just have to educate the critics better.

Jeff / February 8, 2004 6:45 PM

Maybe what contemporary rock criticism needs is editors.

Franklin Bruno / February 9, 2004 6:26 AM

Brilliant, just brilliant:

Revelation One: Language is often inadequate as a tool of representation.

Revelation Two: Opinions differ, sometimes confusingly.

Revelation Three: Language is often used to persuade.

Revelation Four: Some critics seem to be doing something other than telling me what to buy.

Revelation Five: "X is like Y" is uninformative to those unfamiliar with X.

Revelation Six: It's very difficult to construct a quantitative scale of aesthetic value.

Revelation Seven: Beck and The Decemberists are rilly kewl.

Sir, you've really blown the lid off this whole 'music criticism' scam. Sarcasm off: This article isn't 'overanalytical,' as a previous comment claimed. It's just a bad analysis, and a remarkably smug, narrow-minded one at that.

Three semi-serious comments. First, the appeal to an objective 'language of music' immediately raises the question of which characteristics of music are to be valued. Harmonic and structural development, as you suggest, or some other? How will your version of music education deal with harmonically static groove-based music, or improvisation? Will it 'teach' the ignorant that music skillfully performed in real time ('quality of playing') is inherently better than music constructed in the studio? Is social or political context significant? Is pleasure? Is there an asshole's chance in Sodom that listeners and writers with varying backgrounds and assumptions can be 'taught' to agree on these matters? Should they be?

Two: If you can't tell whether a review's underlying assumptions match your own for the purpose of consumer guidance, this may be because the writer hasn't done his or her job well. It may also be because you haven't done your job as a reader well. These are not mutually exclusive.

Three: Power to the people and all, but Amazon reviews are largely written by those with no other outlet and way too much time on their hands. As unsolicited meta-criticism, the present comment should perhaps be dismissed for the second of these reasons, though not the first. (That said, I often find top-ten lists by listeners with specialized knowledge worth looking at. You can tell pretty quickly which ones are, even if you don't already know the music.)

armaghetto / February 10, 2004 5:41 AM

The only music magazine I buy with any frequency is Q, from the UK. Not for the criticism, but for the spiffy sample CD they frequently include. Truly, the only music criticism I lend any credence to is a mixture of Kazaa and my delete button. And maybe my sister's mix CDs.

I find music magazines & websites to be more helpful as a news source. "Ooh! Beck does vocals on the new (ok, old) Air album!" or "The Postal Service has a Death Cab for Cutie member up in it."

I realize this article is focused on the written word, but in general, I prefer my criticism combined with audio and, if possible, visuals (e.g., Gamespot's streaming video game reviews, or NPR's music criticism). Everything else seems to come off as self-congratulatory, hipper-than-thou BS, even when it makes some very valid observations.

see also: http://www.subpop.com/features/pdork/

"Do not mistake the pointing finger for the moon"

bran / February 10, 2004 7:53 AM

Thanks to everyone who has contributed, and will contribute, to this dialogue.

I wanted to throw out that I came to two realizations after this article was published. The first is that it's really damned long. The second is that I wish I had made an additional point in the portion about musical language: a descriptive language is good and all, but it would suffer from the same limitations in describing music that visual language does in describing visual artworks -- i.e., it would accomplish its descriptions with greater technical proficiency, but we would still be left with our own conceptualizations of what the music will sound like. We're still out in the cold.

I agree with armaghetto -- the best possible solution in music reviewing is getting a chance to listen to the music. Craig is right: this is where mix CDs, services like iTunes, etc., all come into play. But there's so much out there to sift through in music -- just like there's so much out there to sift through in music reviews.

There's no easy answer, which is why I proposed the solution that I did. It's not the be-all and end-all, but it does offer a less skewed, less biased* indication of what people think about a given record.

* - "Less biased" only if the reviewers represent a decent sample population. Again, this presents limitations.

 

About the Author(s)

Brandon Heckman writes in Madison, Wisconsin and wants to be a music critic when he grows up. Read more of his work at brandonian.net and the woefully neglected Rock Crit.
