20 August 2010
Filed under Articles
If you haven’t seen it, a report was released yesterday by the NYU Stern and GWU business schools that sets out a methodology for determining the Digital IQ of United States Senators, and then proceeds to do so. You can find it here.
Unfortunately, the methodology is not well described in the report, but from the information available, it appears to be a bit shallow. The good news is that the researchers have asked for comments on it, so here goes. First, the methodology as described in the report (pg 4):
Facebook – 25%:
- Number of Likes
- Like Growth
Twitter – 25%:
- Velocity of Tweets
- Follower Growth
YouTube – 25%:
- Number of Uploads
- Number of Channel/Upload Views
Online Buzz: Blogs – 12.5%:
- Velocity of Mentions on Blogs and Other 2.0 Sites
Site Traffic – 12.5%:
- Annual and Monthly Unique Visitors
- Number of Visits
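The report doesn’t spell out how these components are combined, but the natural reading of the percentages above is a weighted sum of per-category scores. A minimal sketch of that reading, assuming each category has already been normalized to a 0–1 score (the weights are from the report; the `digital_score` function and the normalization assumption are mine):

```python
# Category weights as stated in the report (pg 4).
WEIGHTS = {
    "facebook": 0.25,       # number of likes, like growth
    "twitter": 0.25,        # velocity of tweets, follower growth
    "youtube": 0.25,        # uploads, channel/upload views
    "blog_buzz": 0.125,     # velocity of mentions on blogs and 2.0 sites
    "site_traffic": 0.125,  # unique visitors, number of visits
}

def digital_score(components: dict) -> float:
    """Weighted sum of per-category scores, each assumed normalized to 0..1."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(WEIGHTS[k] * components.get(k, 0.0) for k in WEIGHTS)

# A hypothetical Senator who is strong on Twitter and weak elsewhere:
example = {"facebook": 0.2, "twitter": 0.9, "youtube": 0.4,
           "blog_buzz": 0.5, "site_traffic": 0.3}
print(round(digital_score(example), 4))  # 0.475
```

Note how the structure itself makes the critique below concrete: each category contributes to the total independently of whether any actual two-way engagement occurred.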
Not Enough Information
I’d really like to see the raw numbers and methodology here so I can better understand what’s going on.
- What is the scale that is being used? Clearly, they have established the Digital IQ to align with actual IQ numbers in designating an individual Senator’s capabilities (e.g., average is 100, over 140 is genius), but is this established by normalizing the distribution, or is there a set scale the results are being compared against?
- What is velocity? I can guess that it is the number of Tweets (or mentions) per time period, but it’s a term I haven’t run across previously (perhaps I just haven’t been looking at the research closely enough).
- During what time frame was this analysis made?
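On the first question: if the scale comes from normalizing the distribution, the standard construction is a z-score mapped onto the IQ convention of mean 100 and standard deviation 15. The report doesn’t say it does this; the sketch below only shows what that mapping would look like:

```python
from statistics import mean, stdev

def to_iq_scale(raw_scores):
    """Map raw scores onto an IQ-style scale: mean 100, standard deviation 15."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    return [100 + 15 * (x - mu) / sigma for x in raw_scores]

# Five hypothetical raw scores, purely for illustration:
iq = to_iq_scale([10, 20, 30, 40, 50])
print([round(v, 1) for v in iq])  # [81.0, 90.5, 100.0, 109.5, 119.0]
```

Under this construction the average is 100 by definition, so an "average" Digital IQ says nothing absolute about competence – which is exactly why knowing whether they normalized matters.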
Analysis of the Methodology
The self-stated goal of the study is (pg 4):
Digital IQ = A More Robust Democracy
Our thesis is that digital competence provides an opportunity for senators to authentically engage and mobilize voters and constituents. Key to managing and developing competence is an actionable metric. This study attempts to quantify the digital competence of the 100 U.S. senators. Our aim is to provide a robust tool to diagnose digital strengths and weaknesses and prioritize incremental investment in digital.
Now hold on a second. It seems to me that this methodology is primarily based on eyeballs, the traditional media gauge of effectiveness — to be explicit, the more people that see your stuff, the better your chance of converting them. Unfortunately, this is neither the goal nor the correct gauge to be applying if you are accurately attempting to assess a Senator’s effective ability “to authentically engage and mobilize voters and constituents.” Effective use of social media is about connecting, having conversations, and engaging in meaningful ways.
The majority of the factors in the methodology are nothing more than measures of how traditional campaign tactics have been applied to the digital world:
- Presence on Facebook, Twitter, and YouTube is important, but it only means you’ve shown up to the party – it doesn’t mean you know how to dance.
- Number of followers or likes (and the growth of these) is not much different from traditional polling methods. Are you making a lot of noise? People will follow you – sometimes they may not like you, but they want to know what you’re up to – and there’s no commitment to even read anything afterwards. That measure in and of itself is almost meaningless. (In all fairness, however, it is a mandatory pre-condition for being able to engage meaningfully.) Note: This shows the Facebook metric to be completely irrelevant to meaningful engagement – leaving only 75% of the Digital IQ valuable.
- Velocity of Tweets and Number of Uploads on YouTube are solely about the Senator’s ability to publish. Many of them will simply pipe their press-release RSS feed into Twitter, pushing the same information through a new channel. This is not indicative of engagement. Note: This moves the Twitter metric into the same category as the Facebook metric – leaving only 50% of the Digital IQ relevant.
- The Online Buzz: Blogs section measures a candidate’s ability to get press (not in the traditional sense, but it’s still getting written about) and takes into account sentiment – which I assume means whether the writing about them is positive or negative. This has nothing to do with their ability to meaningfully engage their constituents, and in fact it doesn’t even measure anything they would have to actively do themselves. Note: That leaves 37.5% of the score relevant to the stated goal.
- Site Traffic: This is web 1.0. It is possible to engage site visitors meaningfully, but nothing here measures whether that is happening. Note: 25% relevant.
- The only factor in the methodology I have not berated is the Number of Channel/Upload Views on YouTube. This is not a complete metric for engagement, but at least it gets at the problem: it is tangible evidence that the ideas and information distributed by the Senator are actually being absorbed by constituents. The implication is that if they’re watching the video, they care about what’s being said – a fundamental component of meaningful engagement. Note: Reading the YouTube metric as having three components of equal weight, views alone contribute 25% ÷ 3, for a final relevance score of 8.3%. Not so good for something that’s being touted all over the political media and is representing the good names of New York University and the George Washington University.
Is this fair?
Well, not entirely. I have thus far completely demeaned the importance of the factors that were measured: primarily the willingness to show up online and the ability to attract followers, likes, or visitors to online spaces. This is the first step to being able to engage – you have to be there, and you have to have constituents to engage with. Since the focus of the study is the Senator’s ability to “authentically engage and mobilize,” however, I think accomplishing this first step should only account for 10% of the points that can be awarded in Digital IQ.
That means my relevance score goes up from 8.3% to 17.5%: 10% for presence, plus the remaining 90% scaled by the 8.3% engagement relevance (0.9 × 8.3% ≈ 7.5%). I still don’t think that means it passes.
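For anyone who wants to check the arithmetic behind those two numbers (the three-way split of the YouTube category and the 10% presence allowance are my own reading and weighting choices, not the report’s):

```python
# Weight of the one engagement-relevant component – channel/upload views –
# treated as one of three equally weighted parts of the 25% YouTube category.
views_weight = 0.25 / 3
print(round(views_weight * 100, 1))  # 8.3

# Revised score: 10% awarded outright for presence and attracting an audience
# (the mandatory first step), with the remaining 90% of the points scaled by
# the 8.3% engagement relevance computed above.
revised = 0.10 + 0.90 * views_weight
print(round(revised * 100, 1))  # 17.5
```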
What should be done?
I’m not going to pretend to have the answers, but I do know that studies like this are not actually helpful in improving citizen engagement.
Accomplishing what these researchers set out to do is not easy. Here are some thoughts:
- Better Metrics. I would investigate the metrics of companies like Klout, which claim to measure your influence on Twitter. There are a number of such services, each with a different methodology (I haven’t spent much time looking at them recently). I would imagine similar metrics or tools could be used to analyze discussion on a Senator’s Facebook wall and YouTube channel. How often does the Senator (or their staff) respond to the messages there?
- Other Sites. There should be a category for effective use of sites beyond Facebook, Twitter, and YouTube. Some states may have a large following on MySpace or a local social network or discussion forum that the Senator uses very effectively. This needs to be considered.
- Distinguish between campaign and official use. There’s a difference for members of Congress, and it’s important – without it, incumbents could use federal money and outreach for campaigns, which would unfairly imbalance elections. How effectively are they maintaining this distinction, and what are they doing to move followers from one presence to the other? I don’t know how to accurately measure this factor, but it’s an important part of their digital literacy.
I’m sure there are many people out there with better ideas than mine about how to establish the metrics that are needed here, but I hope this is helpful in some way nonetheless.
If you’ve taken the time to read this, I’d really like to hear your opinion as well. Am I off-base, or am I grasping some fundamental component of social media that was largely unaccounted for in this study?