To Work Better, Try Working Less

It was 4 p.m. on a recent Friday—a time of the week when I usually relax and leave the rest of my to-do list to finish over the weekend. But as this recent weekend approached, I kept pushing myself, heart pumping, to get to the bottom of my list of planned tasks for the week.

After years of working on and off throughout most weekends, I was trying a new approach by taking off at least one entire day every weekend this month, away from reporting, writing and all other work. Early on, I hated it. As simple as it seemed, sticking to a time-off plan stressed me out at first. What I didn't see right away was that my little test was forcing me to improve the way I work.

Amid layoffs and burgeoning workloads, it seems, working any time, all the time, has become a habit. A survey of 605 U.S. workers last spring by the Society for Human Resource Management found that 70% of employees work beyond scheduled time and on weekends; more than half blame "self-imposed pressure." Now, new research suggests some have reached the point where a paradoxical truth applies: To get more done, we need to stop working so much.

Down Time

Sticking to a schedule of predictable time off can lead to improved productivity. Here are some steps to get started:

  • Agree on future goals with your boss and coworkers.
  • Plan for deadlines far in advance.
  • Set, and focus on, top hourly, daily or weekly priorities.
  • Cooperate with coworkers to back each other up.

A groundbreaking four-year study, set for publication in the October issue of Harvard Business Review, seems to confirm that getting away from work can yield unexpected on-the-job benefits. When members of 12 consulting teams at Boston Consulting Group were each required to take a block of "predictable time off" during every work week, "we had to practically force some professionals" to get away, says Leslie Perlow, the Harvard Business School leadership professor who headed the study.

But the results surprised Harvard researchers and Boston Consulting executives alike. Working together to make sure each consultant got some time off forced teams to communicate better, share more personal information and forge closer relationships. They also had to do a better job at planning ahead and streamlining work, which in some cases resulted in improved client service, based on interviews with clients. Boston Consulting is so pleased with the outcome that the firm is rolling out a similar teaming strategy over the coming year on many new U.S. and some overseas projects, says Grant Freeland, senior partner and managing director of the firm's Boston office. "We have found real value in this," he says. "It really changes how we do our work."

Other companies are putting the brakes on work in other ways. At KPMG, a professional-services firm, managers use "wellness scorecards" to track whether employees are working too much overtime or skipping vacation, a spokesman says. At Fenwick & West, a Silicon Valley law firm, "workflow coordinators" review attorneys' hours to avert overload.

And at Bobrick Washroom Equipment, North Hollywood, Calif., a 500-employee manufacturer, staffers are expected to leave in time for dinner. "If you walk around here at 5:30, there are going to be very few lights on, and that's what we expect," says Mark Louchheim, president. He sees family dinners together as important to the well-being of employees and their children, and he also believes setting limits on work motivates people to work smarter.

In the Boston Consulting study, most of the four- or five-member teams were asked to guarantee each consultant one uninterrupted evening free each week after 6 p.m., away from BlackBerrys and all contact with work. Each team held weekly meetings to talk about the time-off plans, work processes and what consultants called "tummy rumbles"—gut worries or concerns about their project.

Requiring hard-driving consultants to take time off was "nerve-racking" and awkward at first, says Debbie Lovich, a Boston Consulting executive who headed one of the teams. Some fought the idea, claiming they would have to work more on weekends or draw poor performance ratings.

But the point of the experiment wasn't to eliminate the "good intensity" in work—the "buzz" from constant learning and "being in the thick of things," Harvard's Dr. Perlow says. Instead, researchers targeted "bad intensity"—a feeling of having no time truly free from work, no control over work and no opportunity to ask questions to clarify foggy priorities, she says.

Ms. Lovich adds: "We wanted to teach people that you can tune out completely" for a while and still turn out good work. The work itself became the focus, "because if you know a night off is coming up, you're not going to let things spike out of control," she says.

After five months of predictable time off, internal surveys showed consultants were more satisfied with their jobs and work-life balance, and more likely to stay with the firm, compared with consultants who weren't part of the experiment. As word spread, other consultants began asking to join the study, Ms. Lovich says. And some clients told researchers the teams' work had improved, partly because improved communication among team members kept junior consultants better informed about the big picture.

Bobrick Washroom Equipment's policy to get workers home for dinner came as a shock to Janice Blakely when she joined the company years ago after working "long, long hours" at an energy concern, she says. Seeing staffers at Bobrick leave by 6 p.m., "I thought, 'Wow, this is not normal.'" But in time, the policy "made me look at my performance and tighten up on what I'm doing," says Ms. Blakely, a marketing manager.

Mr. Louchheim, the Bobrick president, says that employees who habitually stay late may be revealing poor work habits. "We worry about whether they can delegate properly and prioritize their work," he says. Adds Chris Von Der Ahe, a Korn/Ferry International recruiter who works with Bobrick: "People who do well there are well organized and able to plan their work well."

Dr. Perlow says an individual worker can get similar results "by challenging oneself to say, 'I'm going to cut off' " work at a certain time every day or every week. " 'Now, how am I going to get work done in the time I have?' This is meant to open your eyes to the possibility" that the way you work can be changed.

In my own experiment, I have managed to keep at least one weekend day work-free so far this month. This has forced me to put proven time-management principles into practice: Plan blocks of work time and stick to the plan; set short-term deadlines to keep work from spiraling out of control; and keep up with email daily, to avoid backlogs.

The rewards have been surprising. On one recent Monday, after an invigorating weekend of working out, attending church, watching college football and hiking with friends, I quickly solved a work problem that had baffled me the previous week. Asked to assess my work this month, my editor, John Blanton, said my columns have been fine. "I'd say, from our perspective, start enjoying your weekends," he wrote in an email.

This, I hope, will get to be a habit.

Write to Sue Shellenbarger at sue.shellenbarger@wsj.com

The Dirty Little Secret About the "Wisdom of the Crowds" - There is No Crowd

Written by Sarah Perez / September 17, 2009 7:58 AM / 31 Comments

Recent research by Carnegie Mellon University (CMU) professor Vassilis Kostakos pokes a big hole in the prevailing notion that the "wisdom of crowds" is a trustworthy force on today's web. His research studied the voting patterns across several sites featuring user-generated reviews, including Amazon, IMDb, and BookCrossing, and found that a small group of users accounted for a large share of the ratings. In other words, as many have already begun to suspect, small but powerful groups can easily distort what the "crowd" really thinks, which helps explain why online reviews so often skew extremely positive or extremely negative.

Small Groups, Big Impact

To conduct the research, Kostakos and his team worked with a large sample of online ratings. As MIT's Technology Review reports, they studied hundreds of thousands of items and millions of votes across all three sites. In every case, they discovered that a small number of users accounted for the largest share of ratings. On Amazon, for example, only 5% of active users ever voted on more than 10 products, yet a small handful of users voted on hundreds of items. Said Kostakos, "if you have two or three people voting 500 times, the results may not be representative of the community overall."
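Kostakos's point is easy to demonstrate with a toy simulation. The numbers below are hypothetical, not data from the study: 100 casual users who each vote once are pitted against three prolific users who each cast 500 five-star votes.

```python
import random

random.seed(0)

# 100 casual users each cast a single rating near a "true" consensus of 3 stars.
casual_votes = [random.choice([2, 3, 3, 3, 4]) for _ in range(100)]

# Three power users each cast 500 five-star votes (an agenda-driven minority).
power_votes = [5] * (3 * 500)

all_votes = casual_votes + power_votes

casual_mean = sum(casual_votes) / len(casual_votes)
overall_mean = sum(all_votes) / len(all_votes)

print(f"consensus of the casual crowd: {casual_mean:.2f}")
print(f"displayed average rating:      {overall_mean:.2f}")
# The displayed average lands near 5 stars even though 100 of the
# 103 distinct voters cluster around 3.
```

Even though the prolific minority is only 3% of the voters, it contributes over 90% of the votes, so the "crowd" rating reflects its agenda rather than the wider consensus.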

This is hardly the first time that the so-called "wisdom of the crowds" has been called into question. The term, which implies that a diverse collection of individuals makes more accurate decisions and predictions than individuals or even experts, has been used in the past to claim that everything from Wikipedia to user-generated news sites like Digg.com offers a better service than anything a smaller group could create.

Of course, we now know that simply isn't true. For one thing, Wikipedia isn't written and edited by the "crowd" at all. In fact, 1% of Wikipedia users are responsible for half of the site's edits. Even Wikipedia's founder, Jimmy Wales, has been quoted as saying that the site is really written by a community, "a dedicated group of a few hundred volunteers."

And as for Digg.com, a site whose algorithm is constantly tweaked in attempts to democratize the votes of its users, it still remains a place where a handful of power users can make or break getting a news item to the site's front page.

Attempts to Address the Issue

It's not surprising, then, to discover that small groups are in control of review sites as well. Some sites, including Amazon, attempt to address this imbalance by allowing users to vote on the helpfulness of reviews, a much easier process than writing a review yourself. Local business finder and recommendations site Yelp has also given business owners a way to respond to reviews they feel are inaccurate, via an owner-comments feature. Unfortunately, despite these efforts, small groups still control these so-called "popular opinion" features.

According to the article, another professor at CMU, Niki Kittur, suggested that sites create new tools for transparency. For example, there should be an easy way to see a summary of a user's contributions which would quickly reveal any bias. He also suggested removing overly positive and negative reviews.

Earlier this year, we looked at another user-generated review site which attacked this problem from another angle. Lunch.com, a new Yelp competitor, uses something they call their "Similarity Network" which matches you to other site users who share your interests. That way, instead of looking at a list of reviews which could originate from anyone with an agenda or axe to grind, you're focused on reviews from others like you.

Still, there is yet to be a perfect solution to the problem. Perhaps it's time we give up the idea that the "wisdom of the crowds" was ever a driving force behind any socialized, user-generated anything and realize that, just like in life, there will always be active participants as well as passive passersby.


Comments

  • I think your headline is misleading and Vassilis Kostakos should read the book before poking holes.

    Surowiecki is very clear about the conditions necessary for a wise crowd to prevail and those conditions are:

    1. Diversity of opinion
    2. Independence
    3. Decentralization
    4. Aggregation

    If your crowd possesses those qualities then it is wise, and it will be better at making decisions under Surowiecki's paradigm. The crowds used in the research (and crowds in general) don't possess those qualities and are therefore an unfit data set. We should be trying to create the ideal crowd before we can expect superlative results, not trying to get good results from any random crowd.

    Muhammad Saleem

     Posted by: Muhammad Author Profile Page | September 17, 2009 8:18 AM



  • Yep, there is no crowd. Listen to yourself: Wikipedia and Digg are manipulated by their owners themselves.
    http://Evolvhealthy.com

    Posted by: evolvhealth | September 17, 2009 8:20 AM



  • You mean to say, of course, that Wikipedia is edited by a minority of unaccountable zealots

    Posted by: Bruno | September 17, 2009 8:34 AM



  • Wisdom of Crowds is a crypto-fascist idea; there is no objective truth, there are no facts, truth is what "the crowd" decides it is. You get these unhealthy echo chambers of "activists" setting the agenda.

    This article said it best, over three years ago:

    DIGITAL MAOISM
    The Hazards of the New Online Collectivism
    By Jaron Lanier

    Posted by: Peter Verkooijen | September 17, 2009 8:43 AM



  • There are no small groups without a crowd. This is rudimentary logic. And small groups are only representative of something if they are part of a larger crowd. Otherwise, the small group does not actually create any impact. The research conducted by Prof. Vassilis Kostakos is valid, but it does not truly call the wisdom-of-the-crowd vision into question.

    Posted by: Alex | September 17, 2009 8:48 AM



  • It's all about the niche. It's the selection of the relevant from the crowd of the diverse that really drives wisdom.

    Posted by: davidcushman | September 17, 2009 8:54 AM



  • Hi Sarah,

    Nice summary. For those running UGC sites, it's no secret that a small percentage of the user base creates the vast majority of the content.

    At RateItAll (www.rateitall.com), we are trying a number of things to improve the trustworthiness of our content:

    - We offer a newsfeed format that populates your review stream with a combination of reviews from your friends, and reviews of topics you are interested in. We give you the option of moving out in degrees of network separation (e.g. "show me reviews of friends of friends of friends") with the idea that networks are self policing. Presumably, people you know in real life won't deliberately steer you the wrong way.

    - We provide "compatibility tests" that let you find reviewers who most resemble you in outlook on a category-by-category basis (e.g. music vs. politics). This was the inspiration for Lunch's similarity network.

    - We offer a proprietary "confidence score" that helps determine the best product / item in a certain category. This score factors in a number of results - not just rating. See this dog food ranking for an example: http://www.rateitall.com/t-353-dog-food-brands.aspx (Yelp is also aggressive with automated algos to weed out reviews that look shillish)

    - We provide total transparency on a reviewer's rating history. Clicking a user's profile name link shows you immediately all of their ratings activity on the site, without making you dig.

    - We provide review by review voting.

    - We allow commenting on any review - if you shill, you will likely be called out publicly on it by our regulars.

    - We encourage people to login using Facebook Connect (real identity is a major factor when talking about trustworthiness of a review - there's a new site called Oyster.com that ONLY will accept reviews from folks logged in via FB Connect)

    - As a horizontal rating site, we try and offer something for everyone. The more data and more participants, the more trustworthiness.

    - Similarly, we have tried to remove as many barriers to participation without losing identity - for example, we recently launched a tool that lets you post a review and signup just by sending an email to reviews@rateitall.com - the product / place you are reviewing goes in the subject line, and the rating and review goes in the body.

    Transparency. The ability to isolate reviews from people you trust in real life. Trust algorithms. People discovery. Reputation. Broad participation. Real identity. Accessibility.

    In my opinion, these are the keys to maximizing the trustworthiness of user generated reviews.


    Lawrence Coburn
    CEO, RateItAll (www.rateitall.com)

    Lawrence Coburn

     Posted by: Lawrence Author Profile Page | September 17, 2009 9:03 AM



  • Sounds like there is a vast conspiracy going on here. I'm sure of it. A small group of impotent, brainless little twit-faced pee-ons are trying to suck the control and credit out of the hands of those greedy, power-hungry, know-it-all experts. Oh yeah. Treachery is afoot.

    By the way, I remember hearing somewhere that two’s company, but can’t remember what they call three. It’s right on the tip of my tongue … aww, shucks. I can’t remember.

    Nevermind.

    Back to your lives, citizens.

     Posted by: Eric Wilbanks Author Profile Page | September 17, 2009 10:27 AM



  • Agree with the previous commenter. The situations described in the research do not relate to a crowd as defined in Surowiecki's book.

    Stephen Rothman

     Posted by: Stephen Author Profile Page | September 17, 2009 12:02 PM



  • Re "Of course, we now know that simply isn't true. For one thing, Wikipedia isn't written and edited by the "crowd" at all" see http://www.aaronsw.com/weblog/whowriteswikipedia

    Posted by: Dan Brickley | September 17, 2009 12:26 PM



  • Muhammad, do you think Digg complies with those 4 points in the book?

    1. Diversity of opinion [ummmm]
    2. Independence [hmmmm]
    3. Decentralization [maybe]
    4. Aggregation [check, except it aggregates just a handful of sites, like Ars Technica and a few other ones popular with digg's power users]

     Posted by: Richard MacManus Author Profile Page | Posted on FriendFeed | September 17, 2009 2:52 PM



  • Hi Richard,

    I don't think so at all. The problem with most of these sites is that they suffer from information cascades and people participating don't always have 'pure motives.'

    From Wikipedia:

    There are two key conditions in an information cascade model:

    1. Sequential decisions with subsequent actors observing decisions (not information) of previous actors.
    2. A limited action space (e.g. an adopt/reject decision).

    What happens on Digg is that a lot of stories get Dugg because a particular user submitted them, or particular users Dugg them, or they were from a particular domain, or simply because someone asks you to Digg something. Therefore decisions aren't made with merit in mind (or Surowiecki's four conditions in mind) and instead popularity rules the day.

    I can't remember the name of the site but I recall there was a foreign version of Digg that didn't show you who submitted a story, that's a step up. Similarly Reddit doesn't show you how many votes a story has for the first hour. Things like that try to prevent information cascades.

    Muhammad Saleem

     Posted by: Muhammad Author Profile Page | September 17, 2009 3:13 PM



  • Great points Muhammad. That would indeed be interesting, if Digg hid the submitter name!

     Posted by: Richard MacManus Author Profile Page | Posted on FriendFeed | September 17, 2009 3:42 PM



  • An interesting application of crowdsourcing is decision markets; see http://en.wikipedia.org/wiki/Prediction_market. As a side note, it's quite ironic that Wikipedia highlights the shortcomings of those markets in its entry, but in its defense it also includes the criticism of its own bias in its entry: http://en.wikipedia.org/wiki/Wikipedia

    I followed prediction markets for the US presidential election and they seemed spot on. There are numerous studies showing that they predicted the results more accurately than any http://www.semanticsincorporated.com/2008/11/90-of-an-obama-win-as-per-prediction-markets.html

    Limitations of prediction markets are well documented (and include Muhammad's points above), and they constrain their practical application to a well-defined set of situations.

    Crowdsourcing suffers from the same limitations, which is not a problem as long as you limit its application accordingly. The problems occur when you stretch it outside the required constraints and yet present the results as "scientific", i.e. as a good proxy for what the crowd thinks.

    That's what professor Vassilis Kostakos's theory ultimately comes down to (or should - I don't know, I haven't read his report). Apps like Digg or Amazon's review are not scientific applications of crowdsourcing, and thus their results should not be seen as precise representation of our collective thinking.

    On the one hand, I don't think they present themselves as scientific, so attacking them for that is a little controversy-seeking. On the other hand, they also don't advertise how much they rely on such a small group of reviewers, and they probably should - else they should correct their algorithm further to prevent it. Ultimately, I applaud professor Vassilis Kostakos for his role as a watchdog, and helping improve crowdsourcing applications... in a crowdsourcing kind of way.

    Posted by: Greg Boutin | September 17, 2009 4:41 PM



  • In business you have the 20/80 rule.
    On social websites there is possibly a 1/99 rule, or even less.
    Meaning a very small number of people actually generate content, and of this small group a large part has a different agenda, as the content they generate serves a self-interested purpose.

    Posted by: LEADSExplorer.com | September 18, 2009 1:18 AM




  • While I think it is always important to question popular sentiment, I also think a crowd is defined primarily by its members' participation. If you aren't contributing to some group, then your viewpoint may or may not be represented by others in that group.

    I think the interplay between different crowds (i.e., different points of view) has frequently been referred to as politics. Per Muhammad's fine comments (above), when individual viewpoints can be observed without information cascades and the influence of identity, you might be able to glimpse the actual wisdom of the majority at a given point in time. Of course, accepting or rejecting that wisdom is a highly individual decision.

     Posted by: jeff hammond Author Profile Page | September 18, 2009 5:43 AM



  • Richard, Muhammad is right. Surowiecki is adamant that the four rules be followed for the wisdom of crowds to take effect.

    In addition to knowing who submitted something, Digg also breaks Rule #2 (Independence) by showing how many diggs something has. This is clearly social bias...Duncan Watts did an amazing study showing how much influence these numbers can have on someone's behavior: http://bokardo.com/archives/social-design-101-aggregate-displays-change-user-behavior/

    The idea of wisdom of crowds is that people make decisions independently, and then by aggregation we find out what the crowd thought.

    In general, the wisdom of crowds theory isn't as applicable as originally thought...lots of social systems defy the rules in order to drive positive engagement.

    Posted by: Joshua Porter | September 18, 2009 6:07 AM



  • An interesting array of comments, and concerns, which are fascinating in their own right. I've been cognizant of Surowiecki's work, and his four principles for assuring the ability to bring out the "wisdom of crowds" in addressing complex, highly charged topics, while avoiding the "dumbness" and group think so commonly seen. (The Bay of Pigs analogy is a useful one).

    In my work, I've considered and used the four principles as we attempt to pull together the types of collaboratives which will allow for the truly creative thinking necessary in changing health care. While aware of the controversy around Surowiecki's work (likely suggesting he's hit on some sensitive nerves), I concur with many of the comments, specifically those of Muhammad, and have found this a useful construct.

    Of course it is only one aspect of moving collaborative efforts forward. I've also considered the adaptive/technical leadership model of Ron Heifetz, Otto Scharmer's Theory U, and have used aspects of Group Genius as written by Keith Sawyer. An additional useful construct in the work has been incorporating the elements of Fair Process outlined by Kim in the Harvard Business Review in 1997.

    Just my take on looking at the convergences and synergies of different concepts and philosophies, and drawing from them. Of course, attention to the appropriate and attentive use of these concepts is essential to prevent the unintended consequences of misapplication... much more to learn.

    Posted by: Gary Oftedahl | September 18, 2009 6:18 AM



  • "Surowiecki is adamant that the four rules be followed for the wisdom of crowds to take effect."

    In other words, a utopia.

    Posted by: Don Ulrich | September 18, 2009 6:31 AM



  • What I'd like to see is non-fakeable metrics on ecommerce sites: return rates or reorder rates (as appropriate), for example. Or for apps, how many times users open the app per day/week or whatever.

    Posted by: Lysander Meath Baker | September 18, 2009 7:14 AM



  • This is not just about TWoC, but human behavior: a small number of people have contributed, are contributing, and will continue to contribute disproportionately. This is neither a "good" nor a "bad" thing in itself; it is simply observed behavior. And it's nothing new, especially when we consider eccentrics (meant in the most complimentary sense) who possess incredibly intense amounts of focused knowledge and, thanks to the Internet, are able to share more easily their encyclopedic bearings on retrogaming, weird stuff happening in Asia, or even the quintessential archetype of the sci-fi geek.

    Whether it's the proportion of blog commenters vs. blog readers, forum regulars vs. forum lurkers, or content creators vs. content consumers (like in virtual world Second Life), the "actives" are an incredibly devoted subset of the "passives". And someone who's very active on one site may be passive on others depending on how much they're able or willing to contribute. A CHOICE. Again, not a "good" or "bad" thing — many people enjoy watching YouTube videos rather than make them — this is just how it is, and clouding thoughts with an abstract "crowd" isn't helpful when what really matters are individual personalities. Sometimes collaborating, yes, but that's more vivid, peaky grouping (and certainly consistent with some of what Surowiecki suggested).

    Not everyone has an interest in being heard, but severe actives often possess a clear proclivity towards self-expression and being rewarded for it — not necessarily in cold cash, but peer recognition and feeling special. Maybe swag. After all, from academia studies to urban art, having your ideas supported is nourishing! An early trickle of reward, even encouragement from the creators or earlier stars may very well snowball and turn a casual hobby into being a major voice on a site like Wikipedia or Amazon's reviews. I've grown to this position multiple times in varied online communities, so through experience, I can relate to how the process develops. And THOSE are the stories I'm most interested in.

    As described in ambient "social media" books about reaching out to influential individuals instead of appealing to a faceless mass: power users (hence the name) will continue to wield wide impact over others, with the lesser-ranked currying their favor (by linking, for example) to climb the social ladder, learning how to ascend in the process.

    Torley Lives

     Posted by: Torley Author Profile Page | September 18, 2009 8:47 AM



  • Not to mention the fact that the "wisdom" of crowds does not bring out any objective truth, just a collective subjectivism.

    Posted by: Nathan Ford | September 18, 2009 9:50 AM



  • Joshua Porter's comment ++

    In a live talk I saw Surowiecki give shortly after the book's release, he began with the statement that while ants as a crowd become more "intelligent", humans--as a crowd--become dumber. The unfortunate legacy of his book's misleading title is that those who did not read it have still used it to support a "None of us is as smart as All of us..." approach based on collaboration and consensus that--for the most part--the book demolishes.

    Posted by: Kathy Sierra | September 18, 2009 2:02 PM



  • That's why I don't read books on technology and society trends written by journalists... they are all about selling a good story, and miss out on the substance all too easily. Go for Duncan, Barabasi, Tapscott, people who actually have experience and research to back their opinions.

    I mean, the wisdom of the crowd actually says the opposite of what you find in most of the material published on social networks: small-world phenomena, centrality, hubs, power-law distribution of connections, etc.

    Posted by: Filipe | September 18, 2009 3:30 PM



  • You miss the other key point: the Crowd isn't very clever http://www.itskeptic.org/folly-crowd

    Posted by: The IT Skeptic | September 19, 2009 1:09 PM



  • The wisdom of crowds does work. The best example I have seen on the web is http://www.rateyourmusic.com - it's not a site with any bells and whistles, but if you are a music fan go search for your favorite band, look at the list of albums and "crowd ratings" that come up, and see if the crowdsourced ratings match what you (or the critics at large) think are the best, and worst, records by a given band. I bet you'll agree.

    Similar things could be said about IMDb's user-voted films list. The system can work, though I agree wholeheartedly with the other posters who caution against some users with strong agendas polluting the results. Algorithms need to be a moving target. It's possible that rateyourmusic's accuracy has to do with it not being a very professional site and therefore off the radar of those who would try to game it.

    We've just launched an ambitious beta startup at http://www.ranker.com that will also harness the wisdom of crowds, though that feature has not yet been made user-facing.

    Posted by: Clark Benson | September 19, 2009 4:18 PM



  • The Guardian (UK newspaper) ran an online article about people claiming their editing was undone on Wikipedia, and the comments posted in response included eyewitness accounts of having their contributions deleted. Apparently you get ranked higher the more edits you make, many people compete, and the quickest edit is 'delete', so to gain points fast, these types just delete every edit as soon as they find it. :(

     Posted by: aiammaia Author Profile Page | September 20, 2009 4:29 PM



  • Interesting article. However, I'd like to point out one fundamental necessary condition for "The Wisdom of Crowds" that is often overlooked (also in this article): Crowds are only wise when the members come up with their individual opinions INDIVIDUALLY, that is, when they are NOT influenced by what other people already said, wrote etc.

    For example, if twenty experts predict next year's inflation rate, then the average of their predictions will be more accurate when they DID NOT talk to each other about their predictions (as compared to when they did). In the latter case, their errors become more similar because of the discussion and are thus less likely to cancel out. The same logic also applies to voting procedures.

    Surowiecki's (2004) book "The Wisdom of Crowds" (and other work) clearly spells out these boundary conditions of "The Wisdom of Crowds". Neither he nor other people in this area simply claim that having a bunch of people do something will inevitably be wise. It's the independence of opinions that's key!

    So please stop bashing a strawman and criticize "The Wisdom of Crowds" as it was actually proposed.

    Posted by: Stefan Herzog | September 20, 2009 11:45 PM



  • Relying on crowd opinions can be helpful, up to a certain point. Sites like Yelp are usually pretty accurate when it comes to group consensus about local businesses. Voting on the helpfulness of a particular comment can help weed out (or at least minimize) reviews that don't contribute to the community in a significant way.

    Posted by: Michelle Barnette | September 21, 2009 10:50 PM



  • Yep, agree with all those who state that the research does not fit Surowiecki's definition of the crowd. However, the research is interesting if linked to ideas of unrepresentative or illiberal democracy, as posited by Fareed Zakaria, which suggest that small interest groups can hijack democratic systems.

    Posted by: Rob Godden | September 22, 2009 3:40 AM



    Agony Aunt counsels BJP

    Agony aunt gives party leaders some tough love, because that’s how they like it!

    Q Auntyji, what lessons can we learn from the 2009 elections? – AJ

    Dear AJ, as you know, the BJP’s political strategy is based on the Hollywood blockbuster The Mask (Hindi dub: Mukhota), in which Jim Carrey plays a shapeshifting charmer who gatecrashes parties, wins friends, and realizes all his dreams.

    But did you know that there was a sequel to The Mask, in which they tried replacing Jim Carrey with some other actor? I’m guessing you didn’t, because the movie was a colossal dud. Moral of the story is, if you can’t sign up Jim Carrey, don’t bother making the movie.

    Q Auntyji, Arun Shourie has compared us to Humpty Dumpty. Is he right? – VN

    Dear VN, that depends entirely on what Shourie wants Humpty Dumpty to mean. Perhaps he means someone who Humpties a party and then Dumpties it?

    Q Aunty, I’m a young soft-spoken MP who’s just been to jail for making a doctored hate-speech. I got elected by a landslide, but my party is giving me the cold shoulder. What’s going on? Earlier they used to go crazy over such speeches but now they’re looking at me like I farted in the elevator. It’s like the party’s been doctored or something. Anyways, I’m confused about where this party is going. I need answers fast or my mom will drag me to some other party. – Confused

    Dear Confused, Yeh andar ki baat hain. You must exercise extreme caution in your choice of underwear. You can pick either desi knickers or designer knickers. Never pick both, or the knickers will bicker. Too many people are wearing Calvin Kleins underneath their VIPs and finding themselves in a tight spot. Others are growing too fat in the party and, horror of horrors, exceeding their briefs. So you should really pick the right knickers. Remember that your party suffers both when it is out of garment and when it is in coalition garment.

    Q Dear Auntyji, I’m a senior party functionary. I have forgotten my gmail password. Can you help me? – SS

    Dear SS, all party functionaries use the same gmail password. It is “PM2014”

    Q Auntyji, there is too much confusion about Hindutva in my party. Please clarify – (anonymous)


    Dear Anonymous, As you must have heard by now, Hinduism is not a religion, it is a way of life. Hindutva, on the other hand, is a religion, not a way of life. Hindutva is the mother-in-law Hinduism got, when it was married off into the parivar. Unlike Hinduism, Hindutva has solid middle-class values and fixed opinions on everything from booze to skimpy clothes to article 377. Naturally, Hindutva frowns on many things Hinduism has been doing for thousands of years.

    Please note that “Hindutva” is currently out of fashion because it has few takers outside the parivar. The term has been superseded by “Spindutva,” or integral humanism. Spindutva is lucidly explained by Mr. Sudheendra Kulkarni in his bestseller Spindutva for Dummies. Mr. Kulkarni is an ex-IITian and an expert on calculus. He discusses at length why differential humanism is actually integral humanism.

    Q Auntyji, what happened at the recently held Chintan Baithak? – SNC

    SNC, contrary to popular belief, BJP leaders did not shy away from a brutally frank assessment of past mistakes. The past was in fact dissected and analyzed threadbare. Every issue was addressed and the necessary corrections made. The past is looking much better now.

    Q Auntyji, what is Advaniji’s contribution to Indian politics? – RSP

    Dear RSP, Advaniji will be remembered as a man who got on a chariot and spent months galvanizing the masses to do something that took him entirely by surprise.
