The test makers and the test scorers. "Omniscient, no. Omnipotent? Perhaps."

madfloridian (1000+ posts) | Sun Jan-10-10 09:29 PM
Original message
We have come to the point in this nation of allowing a single test--created and graded by companies without regulation or oversight, in a manner protected by trademark, patent, or copyright--to be the main measure of success or failure.

Jennie Smith of the Dade County Education Examiner said it quite well.

Do not be fooled. The people creating and scoring those tests are not some educational gods in the sky, omniscient and dedicated to your child's education. Omniscient, no. Omnipotent? Perhaps.

They can decide whether or not your child is held back in elementary school or middle school, or whether or not he/she graduates from high school. They can decide which schools receive what funding. My own school risks losing its administrators after this year if it does not bring its grade back up to a C from its current D--despite the fact that we had enough points to earn a C this year, but a caveat in the grading system prevented us from actually being given that C.
Our administrators are dedicated, smart, hardworking and caring. I consider myself extremely lucky to work in a school with such good administrators; I understand that it is not necessarily a common occurrence. Yet they could all be involuntarily reassigned--along with many teachers--if the standardized testing industry decides (with the arbitrariness described in Farley's book) that it is so.

..."Standardized testing is sucking millions of dollars out of already very needy Florida schools. The cost of creating and administering the tests, plus the cost of having them scored; schools also hire "reading coaches" and "math coaches" whose primary function seems to be scanning tests and compiling pages of statistics for the district. Wouldn't this money be better spent on having smaller, better-equipped classrooms with qualified (and satisfied) teachers?


The book to which she refers is Making the Grades: My Misadventures in the Standardized Testing Industry, by Todd Farley.

She quotes from the book:

From my first day in standardized testing until my last day, I have worked in a business seemingly more concerned with getting scores put on student responses than getting meaningful scores put on them, a reality that can't be too surprising given the massive scope of the assessment industry and the limited time available to score those tests. Consider if there are 20 short-answer/essay questions on each of the 60 million tests mentioned earlier. That means there would be 1.2 billion student responses that would need to be read and scored within the same two- or three-month time frame. (...)

I don't believe the results of standardized testing because most of the major players in the industry are for-profit enterprises that--even if they do have the word education in their names--are pretty clearly in the business as much to make big bucks as to make good tests. (...) Because the testing company was a for-profit business, I wasn't surprised they wanted to recycle the questions already in their item bank instead of paying someone to write new ones, as I was never surprised during my time in testing when any company opted for expediency and profit over the quality of the work. (...)
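
Farley's arithmetic checks out, and it shows how little time the scoring window leaves per response. A quick back-of-envelope sketch in Python -- the scorer headcount and hours are my own assumptions, not figures from the book:

```python
# Farley's figures: 60 million tests, 20 open-ended items each.
tests = 60_000_000
items_per_test = 20
responses = tests * items_per_test
print(f"{responses:,}")  # 1,200,000,000 -- matches the 1.2 billion in the quote

# Hypothetical staffing: 20,000 temporary scorers, 10 weeks, 40 hours/week.
scorers, weeks, hours_per_week = 20_000, 10, 40
seconds_available = scorers * weeks * hours_per_week * 3600
print(f"{seconds_available / responses:.0f} sec per response")  # 24 -- under half a minute each
```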


Farley went further:

Uncertainty was nearly always evident when committees of teachers came together, whether it was a development meeting when those educators were writing test questions or a range-finding meeting where they were trying to establish or approve scoring systems. Differing opinions were always prevalent. In my time in testing, I consistently worked with committees that disagreed with former committees, committees that disagreed with each other within a committee, and committees that often ended up even disagreeing with themselves. (...) Meanwhile, amid all the differing opinions, and amid all the score changes and rules changes, the assessment industry was ostensibly doing the work of "standardized" testing. (...)

Fifteen years of scoring standardized tests has completely convinced me that the business I've worked in is less a precise tool to assess students' exact abilities than just a lucrative means to make indefinite and indistinct generalizations about them. The idea standardized testing can make any sort of fine distinction about students--a very particular and specific commentary on their individual skills and abilities that a classroom teacher was unable to make--seems like folderol to me, absolute folderol.


In her Examiner post, Jennie Smith describes an FCAT workshop she attended:

While this is certainly true a great deal of the time, even many multiple choice questions--especially when it comes to reading passages and questions dealing with ideas in the readings--are ambiguous, vaguely worded, and their answer choices often contain more than one answer that could be justified as correct. At one FCAT Reading workshop I attended, we--a room full of about thirty English and reading teachers--debated at least fifteen minutes over the correct response to one multiple choice question. I pointed out the ludicrousness of the situation: that a room full of English teachers could not agree on the right answer to this multiple-choice question, yet it could be this very question that would determine whether or not a student would graduate from high school. These questions were targeted at tenth grade students, and college graduate English teachers could not agree on the answer. How on earth could we expect a tenth grader to get the answer right--and have so much depend on that?


When I was teaching, I never got to see the tests ahead of time. During testing, an aide and I had to walk up and down the aisles to keep the students on task, but we were never allowed to help.

During that monitoring I saw question after question I could not answer definitively. This was during my time in both 2nd grade and 4th grade. Many times there were two answers that fit equally well. The pattern was predictable: one really far-out choice meant to distract, a more sensible one that was still off base, and two that made fairly decent sense in context. Sometimes there was really no way to discriminate between the latter two.

Yet those test scores determined everything: the child's educational plans, the teacher's assessment and grading, whether the school itself gets an A, B, C, D, or F grade.

This is unacceptable.

madfloridian (1000+ posts) | Sun Jan-10-10 10:08 PM
Response to Original message
1. Testing industry's big four...from PBS
Profiles of the four companies that dominate the business of making and scoring standardized achievement tests.

When Congress increased this year's budget for the Department of Education by $11 billion, it set aside $400 million to help states develop and administer the tests that the No Child Left Behind Act mandated for children in grades 3 through 8. Among the likely beneficiaries of the extra funds were the four companies that dominate the testing market -- three test publishers and one scoring firm.

Those four companies are Harcourt Educational Measurement, CTB McGraw-Hill, Riverside Publishing (a Houghton Mifflin company), and NCS Pearson. According to an October 2001 report in the industry newsletter Educational Marketer, Harcourt, CTB McGraw-Hill, and Riverside Publishing write 96 percent of the exams administered at the state level. NCS Pearson, meanwhile, is the leading scorer of standardized tests.

Even without the impetus of the No Child Left Behind Act, testing is a burgeoning industry. The National Board on Educational Testing and Public Policy at Boston College compiled data from The Bowker Annual, a compendium of the dollar-volume in test sales each year, and reported that while test sales in 1955 were $7 million (adjusted to 1998 dollars), that figure was $263 million in 1997, an increase of more than 3,000 percent. Today, press reports put the value of the testing market anywhere from $400 million to $700 million.

It's likely that other companies will enter the testing market. Educational Testing Service (ETS), which until recently had little to do with high-stakes testing and was best known for its administration of the SAT college-entrance exam, won a three-year, $50 million contract in October 2001 to develop and score California's high-school exit exam, beating out other bidders such as Harcourt and NCS Pearson.
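
A quick sanity check on the Boston College growth figure (my own arithmetic, not part of the PBS piece):

```python
# Test sales per the Bowker Annual figures quoted above (1998 dollars).
sales_1955 = 7_000_000
sales_1997 = 263_000_000
pct_increase = (sales_1997 - sales_1955) / sales_1955 * 100
print(f"{pct_increase:.0f}%")  # 3657% -- "more than 3,000 percent" indeed
```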

patrice (1000+ posts) | Sun Jan-10-10 11:11 PM
Response to Reply #1
4. I have worked on scoring contracts managed by NCS Pearson. Composition
tests, scored on a standard 6-Trait model. Half of the crew I worked with for the better part of one summer was not qualified to evaluate what these kids had written for 6 or any other number of traits.

madfloridian (1000+ posts) | Mon Jan-11-10 09:02 AM
Response to Reply #4
7. In my 4th grade class, a writing score stunned us all.
One of the students--bright and talented in composition, imaginative, creative, just amazing--received a very low score on the FCAT writing test. I read his essay as he wrote it; it was excellent. The top grade in writing went to a student whose essay I read while monitoring that day of the test; it was incomprehensible, sloppy, just terrible.

I had nothing in front of me afterward with which to question the scores. The parents of the good writer were irate, and though I agreed with them, I could do nothing.

There was no proof either way, and the scores stood, because the parents did not have the money to pursue legal action.

patrice (1000+ posts) | Mon Jan-11-10 09:42 AM
Response to Reply #7
8. They get people who, though they may have degrees of VARIOUS types, have no experience
either with children's writing or with scoring anything. They SHOW them whatever trait model they are using and then that's it: few or no examples of the traits, no analytic discussion of the definitions of the words used as labels for the traits, no presentation of systematic methods to compare what's on the page to what the trait describes and then translate that into a score, nothing! It amounted to just "Read it and give it a score."

I graded mountains of writing when I taught high school; I have a very systematic way of doing this task, so I was okay. The testing companies are trying to beg off on this because they know there is disagreement within disciplines about how to do these things. That's true, but that doesn't mean that A - N - Y - T - H - I - N - G is okay.
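
For reference, the 6-Trait model rates a piece of writing 1-6 on six separate dimensions. A hypothetical sketch of the kind of systematic record patrice describes wanting -- one where every rating must carry a stated rationale. The trait names are the standard ones; the structure and example are illustrative, not any testing company's system:

```python
from dataclasses import dataclass, field

# The six standard traits of the 6-Trait writing model.
TRAITS = ("ideas", "organization", "voice", "word_choice",
          "sentence_fluency", "conventions")

@dataclass
class EssayScore:
    essay_id: str
    ratings: dict = field(default_factory=dict)  # trait -> (1-6 rating, rationale)

    def rate(self, trait: str, rating: int, rationale: str) -> None:
        # Force the scorer to stay on the scale and to say *why*.
        if trait not in TRAITS or not 1 <= rating <= 6:
            raise ValueError(f"bad trait/rating: {trait}={rating}")
        self.ratings[trait] = (rating, rationale)

    def total(self) -> int:
        return sum(r for r, _ in self.ratings.values())

s = EssayScore("essay-001")
s.rate("ideas", 5, "clear central claim, concrete supporting detail")
s.rate("voice", 2, "flat, formulaic phrasing throughout")
print(s.total())  # 7 so far, out of a possible 36 across all six traits
```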

madfloridian (1000+ posts) | Mon Jan-11-10 09:50 AM
Response to Reply #8
10. ,,,

patrice (1000+ posts) | Mon Jan-11-10 10:26 AM
Response to Reply #10
11. uh, . . . It is possible to score for Creativity. I'm not sure what you mean by ritualistic.
When you are dealing with hundreds and hundreds of individuals, it is possible to be systematic so as to create some basis for saying why a given score is different from another. Method serves the group, but it does not have to be 100% of what serves the group; it is ALWAYS possible to include "creativity" in equal weight to whatever other more methodological factors you are using.

I'm not sure what you're talking about, but I can tell you that there was nothing ritualistic in MY relationships with my students.

madfloridian (1000+ posts) | Mon Jan-11-10 10:29 AM
Response to Reply #11
12. Never mind, that is not what I meant.
I was not at all critical of you. I will delete it. My point was there must be room for creativity somewhere in this life.

patrice (1000+ posts) | Mon Jan-11-10 08:19 PM
Response to Reply #12
22. Well, I do kind of think it is possible in some circumstances to see ritualistic pedagogy.
It shows up in certain post-secondary disciplines, among teachers who have published and have status in their field . . . maybe also in primary and secondary schools when there is pressure to assure everyone that the system works (when it doesn't always), or when a teacher has some political or religious agenda that ranks higher than good pedagogy.

tblue37 (1000+ posts) | Mon Jan-11-10 12:06 PM
Response to Reply #11
16. Well, certainly, but we were not encouraged to reward creativity unless
it fell within certain narrow boundaries. Certainly a student who used sophisticated language in complex, varied, and well-structured sentences ended up with a high score, unless the student's essay was outside the "box" that the rubric set up. Honestly, Patrice, I did see several fine essays lose points because they were not formulaic enough. Basically, what the rubrics wanted were standard five-paragraph themes. As "training wheels," the five-paragraph theme is certainly useful at the lower grade levels, but it is formulaic, and it does inhibit more sophisticated thought and writing if it is fetishized. For more on this, check out my article on my Essay, I Say site entitled "A Partial Defense of the Five Paragraph Theme as a Model for Student Writing": http://www.essayisay.homestead.com/fiveparagraphs.html

I would say that within a certain range, the rubrics were applied quite reasonably, but there were a number of fine essays that got unfairly mediocre scores. Also, there really was pressure to not give too many low scores in a single class or school--even if the essays deserved those scores.

We see such pressure at the college level, too, though in the opposite direction. The administrators worry about grade inflation, so if someone's class has a high overall GPA, that teacher might be questioned about it. I grade the essays, not the students and not the classes. Thus, if I have a class full of unusually strong or weak writers, that class's GPA might fall above or below the mid-C range. Usually my class GPAs are at a mid-C, but sometimes they are not. When that happens, I don't look for students to give higher or lower grades to just to smooth out my GPA--I give the essays the grades they deserve and let the class GPAs fall where they will.

patrice (1000+ posts) | Mon Jan-11-10 08:34 PM
Response to Reply #16
23. Much depends upon what the elements of the rubric(s) are and, hence, how relevant they are
to the writing at hand.

Rubrics can also be constructed so that they consist of high-level (super-ordinate) traits which can be manifested in writing in many and various specific ways. I think sometimes the trouble is that we get rubrics that are inverted and students have to produce only certain specifics, rather than any number of specifics all of which could meet the criteria if handled clearly. Of course, all of that has also to do with how the writing task is constructed and what kinds of questions are posed.

Students will fight you over "open-ended" questions; I think they prefer the safety of closed questions, which are also easier and less work. But open-ended questions do more justice to the variations in individuals, IMHO.

tblue37 (1000+ posts) | Mon Jan-11-10 11:56 AM
Response to Reply #8
15. I have taught college English since 1972, so I also have
plenty of training in and experience with grading student writing. Like you, Patrice, I had no trouble qualifying as a scorer--and even worked as a supervisor. But many of the scorers really should not have been there. All that was required was a 4-year degree in any subject--no background, training, or expertise in writing or in teaching writing.

Of course, each new contract required that scorers be trained to apply the rubric and then take qualifying tests on applying it. If a scorer did not pass the qualifying tests, he or she was not hired to score. But again, the rubrics were formulaic, and the way we were required to apply them did not allow for much originality.

If they were properly applied, the rubrics would not have been a problem. In fact, I modified Arizona's six-trait rubric for my own college English classes, and it has worked well for my students. But again, I don't use the rubric as a blunt instrument to enforce conformity, but as a list of writing elements for my students to think about as they write and revise their essays.

patrice (1000+ posts) | Mon Jan-11-10 08:12 PM
Response to Reply #15
21. There is no substitute for in-depth experience with writing and writers. Rubrics are useful, but
they can't take the place of someone who recognizes something such as Voice when they see it.

Of course, writing should be evaluated relative to its purpose and audience and there are a variety of purposes and audiences, not all of which are served by too much, or inadequately developed, creativity.

I too have constructed my own rubrics derived from various cognitive models. They were most useful for helping seniors who were preparing for assessments that included a few timed short-essay questions.

patrice (1000+ posts) | Mon Jan-11-10 09:46 AM
Response to Reply #7
9. I also process evaluations in my current job. The number of times that
it appears that someone has reversed a Likert scale, thinking 1 is the high and 5 is the low, is significant.
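
Reverse-coding is the standard fix for this in survey data: on a 5-point scale, a reversed answer x maps back to 6 - x. A minimal sketch (figuring out which respondents actually reversed the scale is the hard part, and usually relies on deliberately reverse-worded check items):

```python
def reverse_code(x: int, points: int = 5) -> int:
    # On a 1..points Likert scale, 1<->5, 2<->4, midpoint unchanged.
    return points + 1 - x

print([reverse_code(x) for x in (1, 2, 3, 4, 5)])  # [5, 4, 3, 2, 1]
```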

tblue37 (1000+ posts) | Mon Jan-11-10 11:47 AM
Response to Reply #7
14. See my post 13 below and you will understand why your marvellous
writer got a low score. He did not write in the formulaic style the rubric required. That's the thing about genuinely terrific writers--they are original, not formulaic. But originality gets penalized when the rubric requires everyone to do things a certain way. The scoring rubric enforces conformity--and mediocrity.

salin (1000+ posts) | Mon Jan-11-10 08:48 PM
Response to Reply #7
27. in my state - we are able to receive electronic scans
of the responses on open-ended questions - and can request rescoring. It is pretty rare that a score is changed - but one does have access to the student's work, the score, and the opportunity to challenge (though one cannot state grounds for the challenge - it is more like flagging for reinspection, with no way to challenge the results of the reinspection).

tblue37 (1000+ posts) | Mon Jan-11-10 11:44 AM
Response to Reply #4
13. I also used to score essays for them (for 5 years). It is absolutely NOT a good way to
judge a student's abilities. One major problem is that the standardization forces scorers to penalize students who do not match the formulaic style and structure requirements, even if the student's essay is far better than average--or, as in a couple of cases I encountered, actually brilliant. Sometimes the brightest and best writers would show originality that simply did not fit into the formulaic style that the rubric was looking for. When that happened we were required to penalize them. Basically, everyone is taught to write in a mediocre, predictable style in order to satisfy the scoring rubric.

Also, since so much money was riding on these scores, we were pressured to maintain a standard distribution in each class and school, regardless of the objective quality of the essays. Thus, if you were scoring a school full of ESL writers (like Spanish speakers in some Texas schools) whose English writing simply was not up to par yet, you would be pressured not to give too many low scores. Of course, it is unfair to expect ESL writers to compete on the same test with native English speakers, but since that is what they were doing, the ESL writers were not writing acceptable English essays. Nevertheless, a scorer who did not bow to the pressure to maintain a standard distribution would not be likely to stay on the job for long. The same was true in low-functioning schools, where many students who were native speakers of English were not writing acceptable essays. Again, there was a lot of pressure not to score accurately if that meant there would be too many low scores.
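
To make concrete what pressure to "maintain a standard distribution" does to honestly scored essays, here is a deliberately simplified illustration: rank the honest 1-6 rubric scores and re-bucket them to a quota. The class data and quota shares are invented, and this is not any testing company's actual procedure:

```python
honest = [2, 2, 2, 2, 3, 3, 3, 2, 1, 2]   # a hypothetical class scored on merit
target_shares = {1: 0.07, 2: 0.24, 3: 0.38, 4: 0.24, 5: 0.05, 6: 0.02}

ranked = sorted(range(len(honest)), key=lambda i: honest[i])  # indices, weakest first
curved = [0] * len(honest)
pos = 0
for score, share in target_shares.items():
    quota = round(share * len(honest))
    for i in ranked[pos:pos + quota]:
        curved[i] = score
    pos += quota
for i in ranked[pos:]:        # rounding leftovers land in the top bucket
    curved[i] = max(target_shares)

print(honest)  # what the essays earned
print(curved)  # what the quota reports -- 4s (and here even a 6) appear from nowhere
```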

patrice (1000+ posts) | Mon Jan-11-10 08:45 PM
Response to Reply #13
26. That's why I left teaching: too much pressure to mess with the numbers. Yes, grades are NOT
perfect, but if you've done everything a good teacher should do, you MUST take a stand somewhere. You can't just keep shifting in response to pressure. Perhaps holding a hard line would be more valid and acceptable if there were more and richer authentic assessment.

proud2BlibKansan (1000+ posts) | Sun Jan-10-10 10:20 PM
Response to Original message
2. There is no such thing as a perfect score on a standardized test
Blew me away when I first learned this. I was in grad school, and at that time (early 90s) the kids were taking the ITBS. So in my Assessment class we took the test--a 6th grade ITBS, I think--and we all noticed that there were 4 or 5 questions where none of the multiple-choice answers fit. And it wasn't a matter of debate; some were math problems.

So after we expressed our outrage, our professor said he was going to clue us in on a little secret. And that secret was that there was no such thing as a perfect score on any standardized test. He went on to explain that in order for the norms to match each student's performance, they had to eliminate a 100% score.

So no one can ace the test.

Still just infuriates me.

I tell my kids this every year. I know it may discourage them from doing their best, but I think wasting a bunch of time on a problem they can't answer correctly is pretty discouraging too.
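
Whatever the professor's exact reasoning, the norming mechanics behind "no perfect score" are easy to sketch: a reported percentile rank just compares a raw score against a norming sample, so the top of the scale flattens into a ceiling. A minimal illustration with invented numbers, not the actual ITBS procedure:

```python
from bisect import bisect_left

# Hypothetical raw scores from a norming sample (sorted).
norm_sample = sorted([12, 15, 18, 18, 20, 21, 23, 25, 27, 28])

def percentile_rank(raw: int) -> float:
    # Percent of the norm group scoring below this raw score.
    return 100 * bisect_left(norm_sample, raw) / len(norm_sample)

for raw in (27, 28, 29, 30):
    print(raw, percentile_rank(raw))  # 29 and 30 both report 100.0 -- the ceiling
```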

madfloridian (1000+ posts) | Sun Jan-10-10 10:58 PM
Response to Reply #2
3. They have it all down to a system.
And that is just a shame. No perfect score...says so much.

lostnfound (1000+ posts) | Sun Jan-10-10 11:19 PM
Response to Original message
5. Politically motivated test questions are cropping up more often as well
I've seen questions in 3rd grade English books talking about the benefits of globalization. Questions on the GRE seem designed to place you on a political scale more than on an abilities scale. It would be fascinating to know how kids today would score if they were given tests produced a few decades ago. I am a college graduate with an "IQ" of about 150 (or was, at one time, before age took its toll), and I often have trouble figuring out the right answers on my son's vocabulary tests, and occasionally his math questions.
I know the meanings, I know the math, but how to answer it the way they want it to be answered? Sometimes it is very hard to understand. Occasionally, the questions are plain wrong -- I know the answer they are looking for, and it's not the right one.

madfloridian (1000+ posts) | Mon Jan-11-10 01:10 AM
Response to Reply #5
6. Yes, some teachers recently told me that.
About the ones geared to political motives. Not good.

tblue37 (1000+ posts) | Mon Jan-11-10 12:20 PM
Response to Reply #5
17. I also am highly educated with a high IQ,
though that just means I am very good at taking that sort of test. I do extremely well on most kinds of tests. But I have trouble on some multiple choice tests--and also when I have to fill out surveys and forms of various kinds--because the phrasing of the questions is often not at all precise. Those who write the questions are obviously not aware of how ambiguous their language is, and I guess many people who fill out such forms (or who take such tests) do not see the nuances and the ambiguities, but anyone who is at all aware of such nuances in language can be stymied by the way some of those questions are phrased!

patrice (1000+ posts) | Mon Jan-11-10 09:04 PM
Response to Reply #17
29. Completely true and add a variety of ESL to the audience (the kind of culture that is common in
technology, for example). Even among English speakers, people seem to have no idea how ambiguous they are; what difficulties ESL folks must have!!

Then there are also English users who are sooooooooo habituated to their specialty that they leave enormous chunks of cognitive material out of their "communications" entirely. People like my dear departed second husband: a brilliant man, NMSQT finalist, free ride all the way through Yale undergrad and Yale Law, so used to talking "Attorney" that he had no idea he was almost completely incomprehensible. Did great legal work, but otherwise? . . . yikes!

juno jones (1000+ posts) | Mon Jan-11-10 02:04 PM
Response to Original message
18. K&R! n/t

FairWinds (14 posts) | Mon Jan-11-10 02:45 PM
Response to Reply #18
19. Big Thanks to Mad Floridian
I've been reading Mad Floridian's posts on education (esp. on Florida) for a while, and just want to thank him/her a heap for doing such a great job. It is clear that a lot of work goes into those posts. They are very well researched and clearly presented.
Ohio, where I live, is also a poster child for horrible education policy. As a leader in the move to charter schools, we give them hundreds of millions of dollars each year. It is safe to say that at least half of it is stolen by crooks of various stripes.
So Floridian, Ohio would also be a good target for your research.
Lastly, it must be said that Obama and his Education Secretary Arne Duncan are really disappointing with respect to both high-stakes testing and charter schools.

madfloridian (1000+ posts) | Tue Jan-12-10 12:42 AM
Response to Reply #19
30. Yes, I agree. It is disappointing.
Thanks for the nice words.

:hi:

Sancho (1000+ posts) | Mon Jan-11-10 06:15 PM
Response to Original message
20. Additionally, the "contracts" for some tests go to political friends...
The millions and billions spent on testing are wasted unless the test is useful to your child, to you (the parent), and to the teacher. The vast majority of high-stakes tests are worthless.

:dem:

Prometheus Bound (1000+ posts) | Mon Jan-11-10 08:35 PM
Response to Original message
24. Good thread. Thanks.

salin (1000+ posts) | Mon Jan-11-10 08:44 PM
Response to Original message
25. Never said this before - but if I could I would give this multiple recs
I have had a number of colleagues who have spent time doing test scoring. The stories they tell are hair-raising. The published rubrics for rating open-ended response questions had nothing to do with the actual scoring directions; what mattered more was whether the response included one of a set of specific words.
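
If the directions really did reduce to keyword matching, the whole "rubric" fits in a few lines. A deliberately crude sketch of that kind of scoring -- the item and word list are invented, not any vendor's actual code:

```python
# Hypothetical credit words for one short-answer item.
CREDIT_WORDS = {"photosynthesis", "sunlight", "chlorophyll"}

def score(response: str) -> int:
    # Full credit if ANY listed word appears; the actual idea goes unexamined.
    return 1 if set(response.lower().split()) & CREDIT_WORDS else 0

print(score("Plants use sunlight to make their food"))          # 1 -- credit
print(score("The plant turns light energy into stored sugar"))  # 0 -- same idea, no keyword
```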

Thanks for posting this - I hope that many read it.

Starry Messenger (1000+ posts) | Mon Jan-11-10 08:58 PM
Response to Original message
28. Bookmarking.
So important. Thank you.