Big Computing: November 2011

Wednesday, November 16, 2011

Does the NFL really need kickoffs?

This year the NFL moved the spot for the kickoff up five yards and limited the run-up for players to five yards as well. The stated goal was to reduce injury, but I am not sure I agree with that position. If the goal were to reduce injuries on kickoffs, they would have eliminated the two-man wedge. Wedges have been an ongoing problem with football since the game's inception. Players using interlocking wedge formations in the early years of football caused so many injuries and deaths that it threatened the future of the game itself. It was only the intervention of President Theodore Roosevelt himself that saved the game, by prompting rules that reduced injuries and deaths while emphasizing sportsmanship. There are lots of articles on this, but I like this simple one by the Theodore Roosevelt Association, which I will reprint here:"
 
President Roosevelt saves the game. . .
Strange as it may seem, high school football, college football, and even the Super Bowl might not exist today if President Theodore Roosevelt had not taken a hand in preserving the game. As originally played on college campuses, the game was extremely rough, including slugging, gang tackling and unsportsmanlike behavior. Quite a number of players died (18 in just the year 1905 alone, with 20 times fewer players than there are today). Interest in becoming a football player was declining!
But Roosevelt saw merit in the game. It built bodies and could build character, a sense of team and never giving up. Ten of the Rough Riders, the soldiers who fought with him in Cuba, gave their occupations as football players when they enlisted in 1898.
So in 1905, President Roosevelt summoned representatives of the Big Three (Harvard, Yale and Princeton, the universities who first played the game and who also set the rules of play) to the White House. In his best table-thumping style, Theodore Roosevelt convinced them that the rules needed to be changed to eliminate the foul play and brutality.
As a result, the American Football Rules Committee was formed and, in 1906, plays designed to open up the game and make it less dangerous to play were introduced. Some of the changes made included:
  • the introduction of the forward pass,
  • the distance to be gained for a first down increased from five to ten yards,
  • all mass formations and gang tackling were banned.
Football became less dangerous to play, injuries and deaths decreased, and it became more fun to watch.
Adapted from:
The Roosevelt Rough Writer: the newsletter for volunteers in park at Sagamore Hill, Vol 1, Issue 4, Jan. 17, 1998
The NCAA web site, fall 1999, http://www.ncaa.org/about/history.html"

There were real problems with the way football was played in 1905. Eighteen deaths in a single year is a shocking number considering how few people actually played the game! Deaths per year in football have since decreased to a current rate of around four. Most of this improvement can be attributed to rule changes and better safety equipment.

I did not die playing football, and I was never very good. The best thing that can be said about my playing career is that I had really good seats. However, I did get a ton of injuries. My knees are shot, one hip is just not right, and I have spinal issues. I believe most of those issues would not have happened with better coaching early in my career and more rule changes to protect the head and lower body. Improved equipment always helps as well.

I ran across this article about injury rates in the NFL over the last decade. I think this is disturbing because it shows an upward trend in injuries in all facets of the game:

"

Sep 11, 2011


Will the New Kickoff Rules Really Reduce Injuries?

The NFL play-by-play reports when players are injured on each play, or at least when the injury stops play so trainers can attend to the injured player. These are far from 100% of all injuries suffered in the course of play, but they are the ones that tend to be significant or severe--ACLs, broken bones, separated shoulders, concussions--the kind of things that really worry players, teams, and the league.

With that information in hand, we can see the injury rates for each type of play, including kickoffs.

[Chart from the original post: reported injury rates by play type over the last decade]
Injuries are increasing for all types of plays over the last decade. Last season, the injury rate was 1.6% on runs, 1.5% on passes, 1.3% on punts, and 2.0% on kickoffs. The graph illustrates that there is something systemic at work increasing injuries at a predictably steady rate, or at least increasing the reporting of injuries. Because of the very real concern around the NFL, I'd assume most of the increase is real.

(If I had to guess, the simultaneous near-doubling of injuries on all play types between 2004 and 2005 could be due to an increased effort to report injuries in the play-by-play. But even accounting for that jump, injuries are still steadily on the rise. I also suspect the drop in injuries in 2010 for passes and runs may not be just statistical noise, and could be due to enforcement of the rules against helmet-to-helmet hits.)

Increasing the number of touchbacks will certainly reduce the number of kickoff injuries simply by reducing the number of return plays. Needless to say, the fewer kick returns there are, the fewer injuries there will be. The question becomes: How much of a reduction can the NFL expect?

It's hard to estimate how many more touchbacks there will be under the new rules. Kickers may kick higher but shorter, or returners may decide to return the ball from deeper in the end zone than in previous years because of the shorter run-up allowed to the coverage team. But there is preseason data to work with. Because of weather factors (temperature matters far more to kick distances than most people think) and other considerations, we'll compare the 2010 preseason to the 2011 preseason.

In 2010 the preseason touchback rate was 19.5%, and in 2011 it doubled to 39.4%. That equates to a 24.8% reduction in returnable kicks (60.6% / 80.5% = 75.2%). The NFL can expect a proportional reduction in injuries on kickoffs, reducing the rate from 2.0% to 1.5%. (We'll plan to revisit the actual numbers later this season.)

But what does this mean in real terms? How many injuries will this prevent?

In 2010, there were about 9.5 kickoffs per game, which is consistent with the previous 10 years. So reducing the injury rate by half a percent won't add up to much. Instead of the 51 kickoff injuries in 2010, we might expect about 38 in 2011. Thirteen fewer injuries over 32 teams and 267 games from week one through the Super Bowl. That's a reduction of 0.024 injuries per team per game--imperceptibly small and meaningless in practical terms.

Again, not all injuries are reported in the play-by-play. But even if we stipulate that this estimate is an entire order of magnitude too small, that's still only 0.2 fewer injuries per team per game!

Further, looking back at the graph above, it appears that over the past few years, injury rates on kickoffs are in line with those on run and pass plays. In fact, in 2008 and 2009 the kickoff injury rates were lower than for typical scrimmage plays. Getting rid of the two-minute warning in the first half, a gimmick that only allows extra commercials, would have a similar injury-reducing effect just by reducing the number of pass and run plays.

In my mind, this minuscule reduction in injuries does not justify ruining one of the more exciting plays in the game. The trade-off just isn't wise--there are better ways to address injury reduction. Even if kickoff injuries are significantly reduced this season, whatever factors have been causing injuries in general to increase remain unaddressed. Those are the things the league needs to fix, or else injury rates will be back on the climb.

As it stands today, the entire NFL post-score kabuki dance is unwatchable. First there's an automatic review that could take up to several minutes, featuring two beer and two car commercials. Then there's the virtually automatic extra point, the NFL's version of...well, I can't think of anything else in the universe so pointless. Now throw in the touchback, followed by Denis Leary hawking Ford F-150s and a positively terrifying ad for some horror/sado-torture movie that gives every kid under 13 nightmares for the next week, plus one for Cialis and one for whatever lame hour-long drama featuring a tough-cookie hot single mom NYPD detective is going to be cancelled on CBS later this fall. Then it's back to some moron sideline reporter who tells us something we either already knew or that could just as easily be relayed through the booth announcers. Then, finally, it's back to the game."
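
Since this is nominally an R blog, it is worth noting that the back-of-the-envelope numbers in the quoted post are easy to check in a few lines of R. Every input below comes straight from the post; nothing is new data:

    # Inputs quoted in the post above
    tb_2010  <- 0.195                    # 2010 preseason touchback rate
    tb_2011  <- 0.394                    # 2011 preseason touchback rate
    inj_rate <- 0.020                    # 2010 injury rate on kickoffs

    returnable <- (1 - tb_2011) / (1 - tb_2010)  # ~0.75 of kicks still returned
    inj_rate * returnable                        # new expected injury rate: ~1.5%

    kickoffs <- 9.5 * 267                # kickoffs from week one through the Super Bowl
    kickoffs * inj_rate                  # ~51 injuries at the old rate
    kickoffs * inj_rate * returnable     # ~38 injuries at the new rate
    kickoffs * inj_rate * (1 - returnable) / (267 * 2)  # ~0.024 fewer per team-game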

I never believed that Special Teams plays were the most exciting plays in football, nor do I agree that a penalty shot is the most exciting thing in hockey. The essence of football is plays from the line of scrimmage. Special Teams arose simply because teams needed a method to transfer possession from one team to the other. I would be in favor of eliminating Special Teams from the game if it significantly reduced the amount of injuries in football. However, it would not. Changes in rules need to address how these injuries are happening. If the problem is head and neck injuries, change the rules to protect the head and neck. If the issue is leg injuries, change the rules to protect players' legs. Implement spacing rules on the line; ban double teams, slide blocks, and all cut blocking. Finally, develop equipment to address the root cause of injuries. I have not seen research on it, but I have often heard that switching to soft-shelled helmets and pads would reduce the impact and therefore the injuries in football.

 

Friday, November 4, 2011

Why NFL teams punt on fourth down instead of going for it.

I must admit that why NFL teams punt on fourth down instead of going for it is not a question I have ever looked at analytically. With football I become emotional in my desires and decision making. As an old, and never very good, lineman, I always want to go for it on fourth down, and I want to run the ball so I can tee off on somebody. Oddly, it appears that coaches decide whether to punt based on their own rules, which bear little relation to an analysis of which option has the best expected outcome.

Dr. Rangaraj Ramanuja talked about this on the radio this morning on academicminute.org. Here is the text of the radio spot:

We all know that fourth-down plays in football games can make for drama. They are also great occasions for observing organizational decision making. This was the premise that got David Lehman at the National University of Singapore, other colleagues, and me interested in the question: when are NFL teams more likely to go for it on fourth down?

To answer this, we analyzed over 22,000 fourth-down plays from regular season NFL games. Our basic findings: teams rarely go for it. In fact, they went for it in just under 12% of the plays. They were more likely to go for it when they were trailing. Trailing teams were much more likely to go for it later rather than earlier in the game.

As observations about football, none of this is surprising. But as statements about decisions in business organizations, they are quite revealing. The finding that teams chose to punt the ball 88% of the time is significant because, as several studies of NFL teams have indicated, purely from a risk-benefit viewpoint teams should be going for it much more frequently. That they don't supports an important idea in organizational sociology: that decisions in organizations are often rule-based actions.

In other words, people make choices not by calculating costs and benefits but by choosing appropriate rules to follow. So, for football teams, the rule seems to be that if it is a fourth-down, punt. This also means that the willingness to go for it is essentially the willingness to deviate from a rule or experiment with non-routine actions. Applied to business organizations that pursue the goal of meeting or beating analysts' expectations at the end of each quarter, our findings suggest that such organizations may be more willing to try something different when they are underperforming and when they are close to an important deadline such as the end of the quarter.
(from WAMC)



There is a lot here from an analytics and human perspective.

First, it shows me that humans and the organizations they build are risk averse. They choose to protect against the downside (punt) rather than try for the upside (go for it). Most of the people I work with fall at the other end of that spectrum, because to do startups one simply cannot be risk averse. These guys would go for it on fourth and goal from their own one-yard line.

From an analytics perspective there are two areas of great concern. First, people often choose to ignore the findings and recommendations of the data scientist. I am all for questioning an analytical study to determine whether it is in fact valid, but once that has been done, let the data and analytics speak for themselves and respect their findings. Second, organizations usually question what they are doing only when they are doing poorly, and only then do they accept change. Imagine what we could achieve if we were willing to constantly look at better ways to do things, and to adopt those that really are better. We could be so much better than we are today.
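
To make the risk-benefit point concrete, here is the shape of the expected-value comparison those studies run, sketched in R. Every input below is a made-up illustration, not an estimate from real play-by-play data:

    # Toy expected-points comparison for a fourth-and-short near midfield.
    # All of these inputs are illustrative guesses, not estimates from data.
    p_convert  <- 0.60    # assumed chance of converting fourth-and-short
    ep_success <- 2.5     # assumed expected points after converting
    ep_fail    <- -1.5    # assumed expected points after a turnover on downs
    ep_punt    <- -0.5    # assumed expected points after a typical punt

    ep_go <- p_convert * ep_success + (1 - p_convert) * ep_fail
    ep_go                 # 0.9 points, versus -0.5 for punting

With inputs anything like these, the rule "if it is fourth down, punt" leaves points on the table, which is exactly the researchers' point.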

Thursday, November 3, 2011

If you are going to buy one Rstats book this year, buy Parallel R

O'Reilly Media just released Parallel R by Q. Ethan McCallum and Steve Weston. Here is the blurb from their website about the book and the authors:

Parallel R

R is a wonderful thing, indeed: in recent years this free, open-source product has become a popular toolkit for statistical analysis and programming. Two of R's limitations -- that it is single-threaded and memory-bound -- become especially troublesome in the current era of large-scale data analysis. It's possible to break past these boundaries by putting R on the parallel path. Parallel R will describe how to give R parallel muscle. Coverage will include stalwarts such as snow and multicore, and also newer techniques such as Hadoop and Amazon's cloud computing platform.

Authors
  1. Q. Ethan McCallum

    Q Ethan McCallum is a consultant, writer, and technology enthusiast, though perhaps not in that order. His work has appeared online on The O’Reilly Network and Java.net, and also in print publications such as C/C++ Users Journal, Doctor Dobb’s Journal, and Linux Magazine. In his professional roles, he helps companies to make smart decisions about data and technology.
  2. Stephen Weston

Stephen Weston has been working in high performance and parallel computing for over 25 years. He was employed at Scientific Computing Associates in the '90s, working on the Linda programming system invented by David Gelernter. He was also a founder of Revolution Computing, leading the development of parallel computing packages for R, including nws, foreach, doSNOW, and doMC. He works at Yale University as an HPC Specialist.
I know both of the authors, and they are great guys. I have worked with Steve Weston for years, and if you buy one book this year, buy this one. Steve is the creator of the R foreach package, which someone recently described to me as the most significant contribution to the R environment since the data frame. I can believe that, because I cannot remember a presentation on using R in parallel that did not use foreach. Steve is also an excellent teacher and guide to computer science; he was always able to explain what he was doing in a way that I could understand. Steve was kind enough to send me a draft of the book a couple of months ago, and I purchased my copy yesterday from the O'Reilly site.
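
For anyone who has not seen foreach in action, the whole idiom fits in a few lines. A minimal sketch, assuming the doMC backend is installed (on Windows, doSNOW plays the same role):

    library(foreach)
    library(doMC)                 # fork-based parallel backend for Unix-alikes
    registerDoMC(cores = 2)       # register two worker processes

    # %dopar% farms the iterations out to the registered backend;
    # .combine = c collects the results back into a single vector
    foreach(i = 1:4, .combine = c) %dopar% sqrt(i)

Swap in a different do* backend and the loop body stays untouched, which is exactly what makes the package so useful.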

Wednesday, November 2, 2011

Does Voter Redistricting have to mean Gerrymandering?

I was listening to the radio on the way into work this morning, and they were reporting that the Texas redistricting plan was going to be challenged in federal court. The issue in this case seems to be that the dominant party in Texas drew the district lines to work in its favor; the basis for the complaint seems to be that a minority group is not a majority in any of the districts. I am pretty sure there is really no "fair" way to draw a line on a map, and I am absolutely sure there is no way to draw those districts to satisfy the desires of everyone. However, I also believe that having politicians draw these lines is like having the foxes guard the hen house. We have even come up with a term for the redistricting efforts of politicians: we call it gerrymandering.

It got me thinking that there must be a basic model out there for redistricting that would take control out of the hands of the politicians and keep their proposals out of federal court at significant taxpayer expense. It turns out there are, and I will repost two of them here:

Thanks RangeVoting.org


Examples of our unbiased district-drawing algorithm in action / comparisons with gerrymandered districts drawn by politicians


Advantages

The advantage of having our simple splitting algorithm draw the congressional districts is obvious. There is one and only one drawing possible given the number of districts wanted, the map of the state, and the distribution of people inside it. Which of those people are Liberal, Conservative, Republican, Democrat, Black, White, Christian, Jewish, polka-dotted, or whatever has absolutely zero effect on the district shapes that come out. So you know the maps are going to be completely unbiased. Get politicians to draw the maps and you know that not only are they going to be completely biased, they are also going to be a heck of a lot more complicated-shaped and they are going to use up a lot of your taxpayer money figuring out how to best-rob you of your vote. Which do you prefer? It has been over 200 years. Isn't it time to make gerrymandering a thing of the past?

The shortest-splitline algorithm for drawing N congressional districts (part of our ballot initiative)

Formal recursive formulation
  1. Start with the boundary outline of the state.
  2. Let N=A+B where A and B are as nearly equal whole numbers as possible.
    (For example, 7=4+3. More precisely, A = ⌈N/2⌉, B=⌊N/2⌋.)
  3. Among all possible dividing lines that split the state into two parts with population ratio A:B, choose the shortest. (Notes: since the Earth is round, when we say "line" we more precisely mean "great circle." If there is an exact length-tie for "shortest" then break that tie by using the line closest to North-South orientation, and if it's still a tie, then use the Westernmost of the tied dividing lines. "Length" means distance between the two furthest-apart points on the line, that both lie within the district being split.)
  4. We now have two hemi-states, each to contain a specified number (namely A and B) of districts. Handle them recursively via the same splitting procedure.
Asterisk: If anybody's residence is split in two by one of the splitlines (which would happen, albeit very very rarely) then they are automatically declared to lie in the most-western (or if line is EW, then northern) of the two districts.
See high-precision computer-generated pictures for all 50 states.
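
To make the recursion above concrete, here is a deliberately simplified R sketch of the idea. The real algorithm searches all great-circle splits of the actual state boundary; this toy version treats people as points on a flat plane and only considers axis-aligned cuts, but the divide-and-conquer structure is the same:

    # Simplified shortest-splitline sketch: recursively split a set of points
    # (one point per person) into n districts of near-equal population.
    splitline <- function(pts, n) {
      if (n == 1) return(list(pts))
      a <- ceiling(n / 2)                  # N = A + B, as nearly equal as possible
      b <- floor(n / 2)
      k <- round(nrow(pts) * a / n)        # people who land on the "A" side
      ox <- order(pts$x)                   # candidate 1: vertical cut after k-th x
      lenx <- diff(range(pts$y))           #   its length spans the region north-south
      oy <- order(pts$y)                   # candidate 2: horizontal cut after k-th y
      leny <- diff(range(pts$x))           #   its length spans the region east-west
      idx <- if (lenx <= leny) ox else oy  # keep the shorter of the two cuts
      left  <- pts[idx[1:k], ]
      right <- pts[idx[(k + 1):nrow(pts)], ]
      c(splitline(left, a), splitline(right, b))
    }

    set.seed(1)
    state <- data.frame(x = runif(9000), y = runif(9000))  # fake population
    districts <- splitline(state, 9)       # e.g. Tennessee's 9 districts
    sapply(districts, nrow)                # near-equal district populations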

Compare the pictures

Tennessee's 9 congressional districts (pdf; as they were in 2004) Don't you love the incredibly gerrymandered shapes of districts 3 and 7? (No House member from Tennessee ever lost a bid for re-election during 1980-2005.)
Great. Now compare with our approximate sketch (png fastest) (tiff second best) (pdf third) (ps) of how they'd instead look as drawn by our completely bias-free automatic splitting algorithm.
Arizona's 8 congressional districts (pdf; as they were in 2004) Yes! We have a new champion for most incredible gerrymander: district 2.
Great. Now compare with our approximate sketch (png fastest) (tiff second best) (pdf) (ps) of how they'd instead look as drawn by our algorithm.
North Carolina's 13 congressional districts (pdf; as they were in 2004); love that district 12, and hello, district 3 actually is divided into two or three pieces since it goes out to sea and comes back to land! Or maybe two of them are connected at low tide? (Was that what Paul Revere had in mind when he said "one if by land, two if by sea"?) And ooh, check district 1!
Great. Now compare with our approximate sketch (png fastest) (tiff second best) (pdf) (ps) of how they'd instead look as drawn by our algorithm.
Massachusetts's 10 congressional districts (map from Adam Carr's Psephos archive) (gif fastest) (png middle) (jpg slowest) versus approximate sketch of how it would look redrawn with our algorithm: (png fastest) (jpg slower) It was Massachusetts governor Elbridge Gerry who is credited with inventing gerrymandering in 1812. He was voted out of office immediately by outraged voters, but his legacy evidently lives on. Massachusetts has 100% Democratic congressmen and has for at least the last three election cycles, despite having a Republican governor (you can't gerrymander the governor's race, since it is a statewide election). Here's a Boston Globe editorial on the subject.
Texas's 32 congressional districts (side-by-side comparative chart from the Associated Press as printed in the Houston Chronicle, 9 Oct. 2003) showing district shapes before and after the extraordinary redistricting in 2003. (jpg) (And here [png] is a closeup on what they did to Austin to split up those annoying Austin voters.) The gerrymandering was not inconsiderable before the redistricting, e.g. check district 4 near Dallas. But after it, for total statewide brazenness, Texas really takes the cake. Check district 19 (Lubbock in the northwest), and the whole east half of the state is made of those long thin districts. And for extra amazement check those closeups on Houston, and Tom DeLay's personal district 22. Yup, Texas is definitely an unbelievable new champion. (Check the 127-page Texas court decision declaring this totally legal. Before the re-gerrymander, Texas had 17 Democratic and 15 Republican members of Congress. After, it was 11-to-21 the other way. Christian Science Monitor editorial on this.)
(You can make your own sketch of what Texas would look like with the new scheme. Have fun. Much nicer, no?)
Maryland's 8 congressional districts (gif; as they were in 2004). Holy sea of rattlesnakes, Batman! Numerous go-out-to-sea-and-come-back-to-land districts, amazingly squirrelly boundary shapes. Here's approximately how it would look redistricted via shortest splitline (png).
Illinois' 19 congressional districts (pdf; as they were in 2004). Wow. I thought Texas was bad, but Illinois may be the new champ. District 17 is just awesome. But then again districts 19, 15, and 11 are no slouches. You cannot actually see a lot of the small Chicago districts without a magnifying glass, so here is a closeup view (gif) [this picture is from Adam Carr's psephos database] color coded by party control in 2004. Districts 1, 10, 8, 5, and 7 are pretty bad, but district 4 is a wowser! (Notice it looks pretty bad... but then you realize it actually continues west past the point you thought it had ended... going along a thin strip lying entirely inside a highway... which loops around until expanding to enclose a second bad-looking region.)
Great. Now compare with our approximate sketch (png fastest) (tiff second best) (pdf) (ps) of how they'd instead look as drawn by our algorithm.
California? In 2004, not one of CA's 173 state legislative and federal congressional seats changed party hands. In 2002, every incumbent won re-election, on average with 69% of the vote. California may be the new gerrymandering champion, perhaps even worse than Illinois and Texas, but unlike them its gerrymandering is "bipartisan," that is, arranged by agreement among the Democrats and Republicans to "design their own districts" to make every office holder "safe." (Later note: see this about the new random-commission system CA adopted in 2010-2011!) (Specifically, CA state law causes any redistricting not approved by at least 2/3 of the state legislature to be challengeable by the voters in a referendum. That forces the Democrats in control of California to gerrymander for at least some Republican legislators too. Republican Assemblyman Roy Ashburn, vice-chair of the redistricting committee, told the San Diego Union-Tribune in May 2001: "I think it is very possible... we can achieve a bipartisan redistricting... it takes openness and willingness to compromise." And indeed the deal that was struck was to protect almost all incumbents from both parties, allowing Democrats to keep all their seats but also the Republicans could do so, plus the Democrats would draw one new district, mandated by population growth, in such a way that they would get that new one too. Only a few were troubled by the result, e.g. Republican Assemblyman Tim Leslie told the LA Times: "We won't have to worry about elections for six, eight, ten years because [the districts] are all pre-set. Everybody wins... What happened to drawing lines for the people of the state rather than ourselves?") The devil's jigsaw puzzle (gifs from Adam Carr's psephos database): Northern Calif, Calif Bay Area, Central Calif, Los Angeles area, Southern Calif. All of Los Angeles is amazing, but district 53 in Southern Calif (San Diego) is especially neat since it actually goes out to sea and comes back to land.
Florida? Holy cow, Florida (especially Eastern) may be even worse than California! (color pdf), (Northern FL, gif from Psephos), (Southern FL, gif from Psephos with closeups). The district 22 & 23 tandem team is just beyond belief trying to neutralize as many Democrats as possible so that Florida as a whole can be safely massively Republican-dominated; you really need the closeup map to see its full glory. [Alcee Hastings, an impeached Federal judge convicted of multiple counts of bribery, was re-elected by the 23rd. Gee, I wonder why.]
Many more US state district maps at Wikipedia (jpgs); Ohio, Massachusetts, Oklahoma, and Pennsylvania all are amazing. And http://www.melissadata.com/lookups/mapCD.asp gives you district maps on demand – but I just tried it for my New York district 1 and it was not a very good drawing.
Colorado? The New York Times found a non-gerrymandered district there and expressed incredulity.
And here's a USA Gerrymandering art gallery (gifs).
"There is no issue that is more sensitive to politicians of all colors and ideological persuasions than redistricting. It will determine who wins and loses for eight years." – Ted Harrington, political science chair, UNC-Charlotte, quoted during Shaw v. Hunt trial, March 1994
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. – Albert Einstein

Does our redistricting method have any disadvantages?

Yes, but they are tiny compared to its benefits. Our district shapes ignore geographic features such as rivers and highways, and political features such as county boundaries inside state maps. As you can see from the example pictures above, the old approach of politicians drawing districts sometimes used those things, sometimes misused them, and other times conveniently ignored them, all with the principal aim of maximally denying you fair representation.
The advantage of having a purely mathematical definition of the district shapes is that there is absolutely no room whatsoever for bias or any freedom of choice at all by district drawers. That shuts the gerrymanderers down. Period.
We admit you may pay a small price for this, both financially and occasionally (in some areas) in convenience. But the financial price is tiny compared to the amount you will save from having a less-corrupt, more democratic government. And actually, considering you'll no longer have to pay an unwanted small army of super-biased nerds to figure out how to draw and print those crazy maps, a lot of money will actually be saved with our scheme. In 2001, California State Democrats paid political consultant Michael Berman $1.36 million to draw the US House district map for California, with incumbent Democratic members of Congress paying him collectively about $600,000. "Twenty thousand is nothing to keep your seat," Democratic Congresswoman Loretta Sanchez told the Orange County Register. "I spend $2 million [campaigning] every election. If my colleagues are smart, they'll pay their $20,000 and Michael will draw the district they can win in. Those who have refused to pay? God help them." See chapter 1 of Overton's book for this story. In contrast, our high-res computerized splitline districting computation for all 50 states combined cost us approximately 10 cents worth of electricity and less than 1 day worth of time. And the inconvenience will usually actually be "convenience," because it will be much easier to figure out which district you are in if the maps are simple, as opposed to looking like a salamander-octopus mutant diseased with severe hives.

Find out more

Gerrymandering can lead to self-reinforcing one party domination.
Cross-country survey – which countries seem the best and the worst as far as gerrymandering is concerned? And how do they do it?
Do "independent" or "bi-partisan" district-drawing commissions work?
The (somewhat related) fraudfactor.com site.

I do not believe that automated algorithms will result in districts that are more palatable than the ones politicians draw up. Models are still simply the product of human thought and representation, and as a result are simple and imperfect. But an automated system could be more understandable to the general population and cheaper to use.

I am amazed I have not seen this one on Flowing Data, but here is a YouTube visualization someone did of a clustering redistricting algorithm applied to Pennsylvania, which I thought looked very cool.



Finally, in an ongoing effort to mention Andrew Gelman in every blog post, I am posting a link to his 1994 paper, which basically shows the surprising result that despite the efforts of politicians to screw up the process, it works out pretty well in the end, and better than if nothing had changed: Gelman's paper

So there you have a quick flyover from a 1%er of the 99%. Gerrymandering is bad, automated algorithms have their own flaws but make cool visualizations, and it all works out in the end. ;)

Tuesday, November 1, 2011

R 2.14 is released and R 2.15 is now a year away...

I missed this yesterday because I was out trick-or-treating, but R 2.14 was released. I love that on CRAN it is called R 2.14.0 (Great Pumpkin). Here is the link to download it now.

David Smith did a nice summary of what's in 2.14.0 on his Revolutions blog, which I have copied here:


As scheduled, the first release of the new R 2.14 series is now available for download in source code form. As of this writing, pre-compiled binaries for Windows, Linux and MacOS aren't yet available, but will appear on your local CRAN mirror in the next couple of days.
One of the biggest changes in 2.14 is the introduction of the new "parallel" package as a standard part of R. As the NEWS file explains:
[The parallel package] incorporates (slightly revised) copies of packages multicore and snow (excluding MPI, PVM and NWS clusters).  Code written to use the higher-level API functions in those packages should work unchanged (apart from changing any references to their namespaces to a reference to parallel, and links explicitly to multicore or snow on help pages).
In addition, all of the standard (base and recommended) R packages are now byte-compiled on installation, which improves R's performance in several situations.
Other improvements include better alignment of math and text in graphics, Rao's efficient score test for GLM's, the ability to draw curves from functions with plot.function, a new L'Ecuyer random number generator, improved access to documentation (especially vignettes), and several minor bug-fixes.
With R now on an annual (rather than six-monthly) update cycle, R 2.15 is not expected until October 2012 (with point releases for the 2.14.x series likely in the interim).

Thanks David.

2.14.0 continues the trend of incorporating performance enhancements into base R. Parallel packages have long existed for R, but this addition to base R is a nice touch, as is the byte-compiling. I believe the change to an annual release schedule is also a great idea that will work better for commercial customers than the six-month cycle. Most of the commercial R users I have dealt with update on an annual basis, choosing to take either the odd- or even-numbered revisions; an annual cycle will mesh with their update schedules and help those users a great deal.
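
As a quick taste of the new parallel package, both programming styles it absorbed are available out of the box. A minimal sketch that any R 2.14.0 installation should run:

    library(parallel)    # ships with R 2.14.0; absorbs snow and multicore

    # snow-style: a socket cluster, which works on every platform including Windows
    cl <- makeCluster(2)
    parSapply(cl, 1:8, function(i) i^2)
    stopCluster(cl)

    # multicore-style: fork-based workers, Unix-alikes only
    mclapply(1:8, function(i) i^2, mc.cores = 2)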