
Unity08 First Vote Methodology Plagued With Multiple Problems

As the days pass, more and more information about the methodology Unity08 used to carry out the corporation’s “first vote” has come to light, and it doesn’t look good. Here’s what we know so far:

Unity08 sent out its invitation to participate in this “first vote” not just to current members of Unity08, but to others as well. As a matter of fact, Unity08 went out of its way to specifically solicit the participation of the owners of websites that “actively discusses politics and the state of the nation,” inadvertently including website owners who aren’t American. This step clashes directly with the contention by Unity08 in a press release yesterday that its “first vote” was “offered online to the Unity08 membership” and “average Americans.”

Two computer programmers raised concerns regarding Unity08’s declaration that it had used only a subsample of completed surveys for analysis and reporting. The first wrote:

I was shocked to read Slide 3: “… a sampling of Unity08 member completed studies (2931) were taken at random …”. It causes frustration and doubt. It also puts credibility into question. I too am a programmer analyst (30 years) and agree with your technical statement.

A previous comment mentioned a security concern. Are we unable to know duplicate entries? Are there unauthorized people in the study results?

Could it be that the data is not in one place?

If we took random data from a questionable database, we have a questionable study.

This doubt needs to be addressed. I think we all deserve clear, true information on the study database.

A second programmer added:

When I spent the time to fill out the survey, I certainly wasn’t told that my time and feedback were going to be completely worthless and a waste of time. There is a reason that people don’t vote: they don’t think their vote counts. Unity ’08 proved that my vote doesn’t count either. Instead of tallying EVERY SINGLE survey into this study, they took a random sample.

Don’t tell me that “it’s too expensive” blah, blah, blah. I’m a computer programmer (to put it in terms that everyone can understand). This was an online survey. The point at which someone clicked is designated by an exact point on the line (x coordinate). From there it is a VERY simple aggregate to sum up the responses of everyone and then divide by the number of responses to get an average. It takes just as much effort to tally 1 survey as it does to tally 1 million.

Is this a preview of the voting process? I vote for a candidate and then Unity ’08 just takes a random sample of the votes to determine the outcome? I expect more from an organization than to start throwing away responses with the very first survey it ever does.
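
The second programmer’s point about tallying is easy to demonstrate. Here is a rough sketch in Python (using a hypothetical results file and made-up column handling, since Unity08 hasn’t published its data format) showing that averaging every completed survey takes one pass over the data, and that the cost barely changes between a thousand rows and a million:

    import csv
    from collections import defaultdict

    def tally_all_responses(path):
        """Sum and average every numeric answer column in a survey export.

        A single pass over the file; the work is essentially the same for
        one row or one million rows. The file name and column layout used
        here are hypothetical.
        """
        totals = defaultdict(float)
        counts = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for question, answer in row.items():
                    try:
                        value = float(answer)
                    except (TypeError, ValueError):
                        continue  # skip non-numeric fields such as IDs or timestamps
                    totals[question] += value
                    counts[question] += 1
        return {question: totals[question] / counts[question] for question in totals}

    # Hypothetical usage:
    # averages = tally_all_responses("first_vote_responses.csv")
    # for question, mean in sorted(averages.items()):
    #     print(f"{question}: {mean:.2f}")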

Unity08 Vice President Bob Roth indirectly responded with the following information:

I am remote right now, so I don’t have access to the exact number, but there were around 22,000 completed studies. Because of the load that they place on the server, we ran some of them in a highly secure mode (high load) and some in a less secure mode (low load) in order to get all of the invitations distributed in a timely manner. We then took a random sampling of the secure studies, 2931 of them as it turns out, and ran the report with them.

and

For load reasons. The more secure version was very server load intensive, while the less secure version could handle a large number of requests. In order to get the invitations sent in a timely manner to all the members, we split them between the secure and non-secure versions. The study itself was exactly the same. The highly secure version locked a member’s study so that it could not be accessed again. The less secure version could be accessed again, so we only recorded the first completion of the study.

The sampling was random across the secure versions of the study.

Remember, we are pushing the limits of existing technologies while also building things that will be entirely new. This is only the first vote and we’ve already done a lot of work behind the scenes to increase our ability to accept more connections in secure mode for the next version.
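
For what it’s worth, the de-duplication Roth describes for the less secure version (“we only recorded the first completion of the study”) is not exotic. A minimal sketch of that rule, assuming hypothetical field names for a member ID and a completion timestamp, might look like this:

    import csv
    from datetime import datetime

    def first_completion_per_member(path):
        """Keep only the earliest completed study for each member ID.

        This mirrors the "we only recorded the first completion" rule Roth
        describes; the field names (member_id, completed_at) are assumptions.
        """
        earliest = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                member = row["member_id"]
                completed = datetime.fromisoformat(row["completed_at"])
                if member not in earliest or completed < earliest[member][0]:
                    earliest[member] = (completed, row)
        return [row for _, row in earliest.values()]

If repeat submissions were the only worry, it is not obvious why the less secure completions had to be dropped from the analysis entirely rather than de-duplicated along these lines and counted.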

So from approximately 100,000 members and an undisclosed number of nonmembers, 22,000 survey completions were returned. Some of these surveys were carried out on a secure server and some on an unsecure server. Unsecure survey responses were simply not counted. Let’s say that again: unsecure survey responses were simply not counted. Because this split was an ad hoc response to server-load problems Unity08 did not anticipate, made in the middle of the “first vote” process, it is unlikely that the designation of some members’ (and non-members’) surveys as secure and others as unsecure was either random or representative.

So we have a large set of “first vote” invitations, sent out to approximately 100,000 members and an undisclosed number of nonmembers.

Then we have the 22,000 “first vote” completions.

Then some of those votes — an undisclosed number — were thrown out.

Finally, from the unknown remaining number of votes, a “random” (simple random? cluster random? stratified random? weighted random? this is undisclosed) subset of votes (why 2,931?) was selected for analysis and reporting in Unity08 press releases. As the two computer programmers mentioned above noted, this decision is quite odd, since it takes hardly any more computer power to run cross-tabulations on 2,000 cases than it does to run cross-tabulations on 10,000 cases or even 100,000 cases. Any of those analyses can be run using Microsoft Excel and a desktop computer.
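
To make that concrete, here is a rough Python sketch of a full cross-tabulation over every completion, again with hypothetical file and column names. On a hundred thousand rows this runs in seconds on an ordinary desktop, which is why the computational case for a 2,931-case subsample is so hard to credit:

    import csv
    from collections import Counter

    def cross_tab(path, row_field, col_field):
        """Count every combination of two answers across all completions.

        One pass over the data; 100,000 rows cost barely more than 2,931.
        The file and field names here are hypothetical.
        """
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[(row[row_field], row[col_field])] += 1
        return counts

    # Hypothetical usage: tabulate a priority question against respondent region.
    # table = cross_tab("first_vote_responses.csv", "top_priority", "region")
    # for (priority, region), n in table.most_common():
    #     print(f"{priority!r} x {region!r}: {n}")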

Actually, when I write “subset,” I should write “sub-sub-subset.” Since invitations to complete the “first vote” went out beyond the target population of Americans, since at two stages the winnowing process is non-random, and since at the third stage the “random” methodology for case selection is undisclosed by Unity08, the end result cannot be trusted to be representative of whatever larger group Unity08 is interested in. This is a methodological mess which has no business serving as the basis for any public relations declarations or political decisions.
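
The danger is not merely theoretical. A quick simulation, using entirely made-up numbers purely for illustration, shows how a non-random winnowing stage can skew a result well before any “random” subsampling takes place:

    import random

    random.seed(1)

    # Entirely synthetic illustration: 22,000 respondents, 55% of whom favor
    # some option. Suppose respondents routed to the discarded "less secure"
    # survey happened to lean toward that option.
    population = [1] * 12_100 + [0] * 9_900   # 55% support overall
    random.shuffle(population)

    kept = []
    for vote in population:
        # Non-random exclusion: supporters are discarded more often.
        p_discard = 0.6 if vote == 1 else 0.3
        if random.random() > p_discard:
            kept.append(vote)

    sample = random.sample(kept, 2_931)       # then a "random" subsample

    print(f"true support:     {sum(population) / len(population):.1%}")
    print(f"reported support: {sum(sample) / len(sample):.1%}")

In this toy example the reported figure comes out near 41 percent in expectation, well below the true 55 percent, and the later “random” subsample faithfully preserves the bias introduced by the earlier non-random exclusion.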

In nine months’ time, Unity08 claims it will be ready to carry out a first-of-its-kind nationwide online secure nomination for the most powerful office in the entire world. Do you think it will be ready?

1 comment to Unity08 First Vote Methodology Plagued With Multiple Problems

  • Joseph

    Hopefully in the next vote they’ll do them all secure. I think it makes sense to exclude votes which may have been illegitimate (i.e. the unsecured votes), simply because people could have most easily rigged those votes by taking the survey multiple times. I really don’t get why they didn’t just include all of the secure votes though.

    Hopefully, next time, they’ll have a better system where all votes are secure and accurate…and by American voters.

    I still, however, would love to know why no attention is being given to the fact that the DNC (and possibly RNC) are planning to disenfranchise millions of voters in their primary elections and not let them have any voice in nominating the candidate for President. 1,000,000+ people > 100,000 people…
