Re the SurveyFail & clearing one's survey answers

Sep 02, 2009 00:11

As fans have noted, even if Ogi Ogas & Sai Gaddam's survey did not record IPs (which is what they say, and there is no evidence that this is untrue), they did use cookies to track each computer's (not each user's: another problem) responses to the survey. The cookie preserves the link to that computer's (or user's) responses and lets them, for instance, go back in and complete or change their survey answers, until such time as the user clears their cookies and/or the survey data are cleared by Ogi & co.

This is why users who decided the survey was a POS could go back in and backspace out their fill-in-the-blanks.

But what about those radio buttons? One could only select a *different* answer, not "no answer."

Ogi admitted (sorry, no link at the moment) that this was a problem he hadn't thought through. (um yeah, and about 4^400 others.)

It is indeed a violation of social survey professional ethics to fail to allow respondents a way to clear ALL of their answers should they decide not to participate. To quote one academic expert in social research methodology (not a fan, but one who was forwarded the links as an example of stupendously ill-conceived research): "the fact that radio buttons were used in the responses meant that people had to go find a script to remove the answers rather than change them. It is unacceptable to place the burden of anonymity or confidentiality upon participants in research."

However, das_dingsi on IJ, following the lead of helens78 on DW, found a Greasemonkey script which allows you to uncheck radio buttons in surveys.

As Dingsi said: A lot of people have wished they had had this option in the recent Survey!Fail debate, but in fact it's quite nifty in general. I created a test poll so you can check it out after installing the script. (It works for me.)
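For the curious, the core of a script like that is quite small. Here's a minimal sketch (this is NOT das_dingsi's or helens78's actual script, just an illustration of the general approach a Greasemonkey userscript would take): remember whether a radio button was already checked before you click it, and if it was, clear it on that click.

```javascript
// Sketch of a Greasemonkey-style userscript that lets you un-select a
// radio button by clicking it a second time. Illustrative only.
// ==UserScript==
// @name     Uncheck Radio Buttons (sketch)
// @include  *
// ==/UserScript==

// Pure helper: given whether the radio was checked before the click,
// return the state it should have after (second click clears it).
function nextState(wasChecked) {
  return !wasChecked;
}

// Browser-only wiring, guarded so the helper can run anywhere.
if (typeof document !== "undefined") {
  // mousedown fires before the browser flips `checked`, so we can
  // record the pre-click state here.
  document.addEventListener("mousedown", function (e) {
    var el = e.target;
    if (el.tagName === "INPUT" && el.type === "radio") {
      el.dataset.wasChecked = el.checked ? "1" : "0";
    }
  });
  // After the click, clear the button if it was already checked.
  document.addEventListener("click", function (e) {
    var el = e.target;
    if (el.tagName === "INPUT" && el.type === "radio" &&
        el.dataset.wasChecked === "1") {
      el.checked = nextState(true);
    }
  });
}
```

The point being: there's no browser-native way to "un-pick" a radio group once you've picked, which is exactly why a survey that uses radio buttons with no "no answer" / "clear" option traps respondents.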

You'll need, of course, Firefox and the Greasemonkey addon (see this post for more info; GM is listed under #6).

Feel free to check it out - I didn't try it myself, not having a desire to install Greasemonkey ATM - and save it for the next time you get inveigled into a boneheaded DNW survey.

The best solution, of course, is to intercept would-be researchers *before* they start trampling people's privacy and triggering them with offensively sexist and otherwise biased language. I have volunteered to be part of any group that pulls together an FAQ of links and info on what survey or other such research would look like if it were well-designed, ethical, respectful and, above all, alert to legal and professional requirements concerning explanation, privacy, consent, minors, *data removal*, and so forth. My training is not in social research review boards (IRBs and such) but in anthropological research, and I'm also in the Amer. Sociol. Assoc. (which has a very strict standards code) and AoIR, the Association of Internet Researchers.

There is no "one ring" universal set of regs for social research on the internet. Rather, given the complexity of people's cultures and of research goals, groups like AoIR have worked to frame "pluralistic" models that describe a range of acceptable research practices, rather than insisting one standard will work across all the complexity of the global web and its communities.

One of the interesting points they raise in a working document available online is that there are two quite different kinds of standards depending on what background you come from. (Congratulations, Ogi, you managed to violate them both!) The regs from the US NIH come out of medicine and science, with the perspective that internet users need protection from unethical researchers; these standards work to prevent subjects from being *exposed* and victimized or exploited by researchers. A contrasting perspective comes from the humanities, which sees online users as communities of artists, producers of creative work, with considerable agency and authority in their own right.

These different views, while both concerned with research ethics, can result in completely opposite treatments of research subjects. For example, "well-intended human subjects' protections as applied to activist sites lead to unethical consequences [when] these protections (confidentiality, informed consent, etc.) work to reinforce a larger social marginalization." They actually can do harm by constructing the subjects as people who 'need protecting' because they are powerless, and are powerless because they are (to paraphrase Ogi Ogas) "contrary to what society expects."

But there is a second view of Internet material "as cultural production - and thus the provenance of humanities disciplines such as art history, literature, film and media studies, theatre and performance studies, etc." Using human subjects guidelines from medical and social science exclusively "leads to the unethical consequence of suspending the critical analyses of, say, racist and homophobic hate sites - analyses central to the activist agendas of critical race studies, gay and lesbian studies, feminism, etc."

In this area of potential conflict, the alert reader will notice some relevance to discussions fandom seems to be having all the time about "not posting negative criticism of their behavior in other people's journals & comms" vs. "speaking out against racism, etc., whenever & wherever it is."

I love it when wanks - or rather, the analyses of wanks - link up together on the meta or theoretical plane. This will be fun to examine!

Also, I would love to get some information sorted out for fandom, because these dunderheads really tainted the lake for other researchers who, down the road, would like to do decently planned and ethically administered research without everyone saying OMG MORE SURVEY FAIL, GO AWAY. It sucks to have your field - whether neuroscience, social research, fandom/media studies, internet studies - be given an epic bad name by people who aren't even in it.

brainfail, surveyfail, internet, tools, fandom studies
