Mike
Dennison ([email protected]) compiled this thread on 20 December 2000
from a discussion on the Electronic Discussion on Group Facilitation:
[email protected] https://www.albany.edu/cpr/gf/
I have tried to group the responses under a number of headings, with the occasional split of a response into two; otherwise the thread is presented in chronological order, and the responses are not edited other than to remove duplicated or redundant text. The groupings are:
* The overall purpose
* How many dots to use
* Advice on the method
* Variations on the method
* Interim summary
* Caveats and warnings - use voting with care!
* Experience using dots
* Web and online use
* Extending the technique
* Miscellany
Thank
you to all of you who contributed to a lively and enlightening debate.
In a
message dated 11/15/00 6:17:43 AM Central Standard Time, I posed the original
question:
Many
years ago I was introduced to a technique of using sticky coloured dots to
identify the most important items from
a list, a form of prioritisation. I generally hand out between 3 and 5 dots to
each member of the group. The number I
use (when I think on it) depends on the group size, the number of items and the number I would like to focus
on out of the group. (More items -> more dots, more to select -> more, more people -> less?).
The rules are simple - there are none, in that a group member can put all their dots on one item, spread them out, use none, or even trade their dots for favours from other group members. Just stick your dots onto the (generally) flip charts against the items and then we see how it all pans out.
It is a technique that generally works well. However, I mentioned it in a training session and was asked what the rationale was for choosing the number of dots. I vaguely remember the person who introduced me to the technique suggesting a rule of thumb but can't remember what it was. I can't remember who the person was either!
So I am seeking the help of the vast combined powers of the discussion group. Have any of you used the technique? If so, have you got a formula or process for choosing how many dots to hand out or, if not, how do you choose?
I have
often used "rounds" of multivoting.
What you're looking for is some
clear groupings: a few items
with twice as many votes as there are voters,
another grouping with about 1/2 to 2/3 that number, another grouping with
one to three votes, and those that didn't get any. If you don't get clear break points that show which are the
obvious high-priority items, you might
revote on the top half, or start
over with a different number of votes (usually
to get the spread, more votes are needed to
provide people with enough to
give a vote to each of the ones they think are important and a lot of
votes to their top choice -- hmm, that starts to sound like a rationale!!), or start over with a different prioritization technique like pairwise ranking, advantage/disadvantage
identification, nominal group
technique (the limited definition favored by TQM consultants, not
the full-blown technique), force
field analysis, or weighting and
prioritizing.
Most of
the processes I use don't require dots -- and I've gravitated to NOT using them
since often when I've been a
participant in dot voting, I think differently from others, my ideas from a
brainstorm don't get votes (or I vote
for ones nobody else does) and I feel cut off and tend to stop participating.
I usually also feel a little
railroaded.
So
note, this isn't commentary about how many dots to use when, but when and why
to use dots, if at all.
I now realize my reaction was because we voted and moved on. (Kind of like you'd want a national election to be!) I once voiced this to facilitators during a demonstration of using computer-assisted facilitation. Their comments were along the lines of NEVER using multi-voting to make a decision. (On computers, you hope it's not dots. My granddaughter used my 33-cent stamps on my screen once! :>) INSTEAD, to use it to provoke dialogue over WHY the results were what they were, which brings a lot more understanding. I suppose it also depends on how "hot" the topic is and why you are voting.
One use is to quickly see where you already have agreement (as I believe Bob Pike mentioned) and then discuss the rest.
As an example of bringing more understanding, I participated in using it
recently at an IAF ACT meeting (as in
Association Coordinating Team, meaning the Board). We were looking at the organization in terms of a model
someone was sharing. We then went up and in each of the 4 quadrants of the model (which had about 5
levels each coming out from the center of the page), put a red dot for where the organization is now
and a green one for where we'd like it to be in 2 years.
The
speckled result was amazing. The rest of our time (for this section of the day)
was spent asking each other questions, like when all the dots in one place were
green except for one red one, what was the person thinking? It was extremely
revealing and many new insights surfaced.
Thanks
for a chance to vent on this one. I
have at least come around to seeing how this technique could be useful!
Yes - we use the dots according to a formula called the 1/3 plus one rule. Regardless of the number of people, each person gets dots based on the number of items on the list. 6 items, for example: 1/3 of 6 is 2, plus 1 equals 3 dots. 9 items: 1/3 equals 3, plus 1 equals 4 dots. People vote for the items most important to them, one dot per item. (You cannot give all your dots to one item.) ***What really makes this work is to have a brief discussion of each item before the voting begins. When the voting is concluded there are generally several items that everyone has selected. It makes it fairly easy to accept these unanimously as priorities. With those off the list you can then further discuss the remaining items to choose additional priorities. ***I've had a board of directors of 36 people build an entire strategic plan for the first time in 2 days using this process.
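As a rough illustration only, the 1/3-plus-one arithmetic might be sketched in code like this (Python; the function name is invented, and rounding up for lists that are not a multiple of three is an assumption, since the examples above divide evenly):

    import math

    def dots_one_third_plus_one(num_items):
        # One third of the items (rounded up; an assumption for lists that
        # are not a multiple of three), plus one extra dot.
        return math.ceil(num_items / 3) + 1

    # The examples from the message above:
    print(dots_one_third_plus_one(6))   # 1/3 of 6 is 2, plus 1 = 3 dots
    print(dots_one_third_plus_one(9))   # 1/3 of 9 is 3, plus 1 = 4 dots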
the
only "rule of thumb" I can recall that might relate is that when you
ask people to identify (or prioritise) a specific # of items, they tend to fall
short by one (they can't come up with the last one or choose a "last
one" among the remaining...), so if you want to get to 4 of something, ask
for 5, etc.
I use a very (and I mean very) general Pareto process (80/20 rule) to determine how many dots to distribute to each person. I take the total number of items, divide by 5 (20%), and this determines how many dots each person receives. I get to provide a high-level learning opportunity on the 80/20 Rule at the same time as we are doing an exercise to reduce the list to the key issues.
I have
used that technique for years in workshops I run to allow participants to
choose activities. We call it
"dotmocracy". And I have
always had people rank their top 3 preferences (so, 3 dots each). If you give more out, it may be harder to
find a difference between items voted for, however. When I have done this with 3 dots, I am usually only looking to
choose the top 2, or maybe 3 items.
Perhaps that could be the rule of thumb (the number of dots roughly
equal to the number of items you will ultimately use in the workshop). It would also depend on the number of
options you were giving the group to choose among. I would also expect that going below 3 dots if there are a fair
number of items (no matter what number of items you choose) would be
frustrating to the group.
Yes, there is (kind of) a formula that I use: ABS(number of items to prioritize / number of participants) + 1
The usual reaction of the participants is that there are not enough dots available for them. The fact that they feel that way is an indication for me that they really do have to prioritize. And it almost always works out well.
I would
take "ABS" to mean the "absolute value" which doesn't make
sense in this context since there are no negative numbers. Did you mean to
round up?
How
many dots do I get with 20 items and 10 people?
With 10
items and 20 people?
When
the outcome of the first part of the formula is positive you round up.
If it
is negative it automatically results in zero.
Thus
your two examples would result in:
(20/10) + 1 = 3
(10/20) + 1 = (0) + 1 = 1
And for instance
(25 / 10) + 1 = (3) + 1 = 4
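One possible reading of that formula, sketched in code (Python; the function name is invented, and the treatment of lists with fewer items than participants is an assumption chosen so that the three worked examples above come out as shown):

    import math

    def dots_per_person(num_items, num_participants):
        ratio = num_items / num_participants
        # Round the ratio up when there are more items than participants;
        # otherwise treat it as zero (an assumption that reproduces the
        # (10/20) + 1 = 1 example above).
        base = math.ceil(ratio) if num_items > num_participants else 0
        return base + 1

    print(dots_per_person(20, 10))   # (20/10) + 1 = 3
    print(dots_per_person(10, 20))   # (10/20) + 1 = (0) + 1 = 1
    print(dots_per_person(25, 10))   # (25/10) + 1 = (3) + 1 = 4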
In
Weaver & Farrell's book "Managers as Facilitators" they have a
page on multi-voting. Their guideline is that the number of votes
for each group member is equal to a
third to a half of the total number of items on the list. For example, if the list contains 30 items, then each
person gets 10-15 votes. Each member is given the same number of
votes and told to give more votes to
the projects having the most impact on the group's purpose. Set some guidelines for the maximum number
of votes allowed per item. For example,
15 votes could be distributed as follows: 5 for the first choice, 4 for the second choice, 3 for the third, 2
for the fourth, and 1 for the fifth.
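A tiny sketch of the Weaver & Farrell guideline (Python; the function name is invented, and reading "a third to a half" as a simple lower and upper bound is an assumption):

    def votes_per_member(num_items):
        # A third to a half of the total number of items on the list.
        return num_items // 3, num_items // 2

    print(votes_per_member(30))   # (10, 15): each person gets 10-15 votes

    # One way to cap votes per item, as in the 15-vote example above:
    # 5 for the first choice, 4 for the second, then 3, 2 and 1.
    allocation = [5, 4, 3, 2, 1]
    assert sum(allocation) == 15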
I know
this technique as "multivoting."
Sometimes I use dots or other stickers,
sometimes I just let people use markers and use the honor system to make sure they don't use more than the allotted number of votes. (But then I count up to see if all the votes were
used, or too many! Interesting to see group reaction -- but I digress.)
The
official CSC Workshop and Facilitation Techniques course calls this N/3 -- take the number of items being voted on,
divide by three, and that's how many
votes you get. I find that too
simplistic. As you do, I vary the number of votes based on the same
factors. I don't think there's a
"theory" or
"rationale." It's more like
making a sandwich: you look at the bread,
meat, and cheese, and sort of "know" how much mustard to
use. If it doesn't taste right, you adjust the amount next time.
Note that some people use multivoting or N/3 or voting by dots to pick the top one. That's not the purpose: the purpose is to find the groupings of high, medium, and low priority. Also too restrictive is the tendency of some people to label this a "problem solving" technique and use it only to pick the high-priority problems to work on after identifying all possible problems.
I too
use colored dots for prioritization.
This is a weighted vote the group participates in. I hand out 6 dots and tell people to assign
(vote) 3 dots to the most important item in their judgement, 2 to the next most
important and 1 dot to the third most important item. They CANNOT vote all 6 for their top priority; they must distribute
them among their top three! It forces
them to make some judgements and decisions.
Source: I have no idea!
When I
have used sticky dots for prioritization, it has usually been for action-planning. I have generally used 4 dots...2 of one color and 2 of another.
One color signifies an issue related to general prioritization; in other words, "I think this is
important. People ought to care about
this issue." The other color denotes commitment, i.e.,
"I am willing to personally devote
time and energy to this issue." It
quickly becomes clear to a group which
actions in a field of options have a valid chance of becoming a reality...and also why some issues always seem to be a
priority but never get off the dime. If the head says "yes, we should"
but the heart says "I don't want
to," nothing ever happens.
Thanks
all for your stimulating and informative responses. A lot in a short time!
My very
short summary to date (skimming only so far!) is:
* lots of you use dots or similar methods
* there are a number of variations, mainly to do with the purpose
* in general no firm rule on how many dots, pragmatism and experience rule
* some purposes/variations do determine how many
* concerns expressed over voting and its meaning
* concerns expressed over what happens next
I will collect the thread in a Word doc; if anyone out there wants it (it will save you tidying it up), let me know by private email and I will send it back when the flood of responses has dried.
I
perhaps should have mentioned that "voting" or "dotting" as
I usually use it is just one way of seeking a view from the group when faced
with a list of items that need to be addressed and where too little time is
available to give them all due attention during the workshop. I don't see it as
a closing down process or one that discards the non-selected items but rather
one of helping the group to focus its limited time on things it perceives as
important.
My task is then to facilitate the next stage - what does it (the outcome of the dotting) all mean, does it help the group move forward, and what do they want to do about the ones left over? Other uses - e.g. getting a high/medium/low priority grouping or establishing a ranking - will of course move forward with their own version of the process.
One more addition - I have sometimes (again depending on the purpose and the group's needs) added different coloured dots to allow group members to express particular views, such as Must Dos or Mustn't Dos (veto). Again this provides material for group consideration; they are not taken without exploration of why.
Look
how many people on the [GF] list had insights into dotmocracy/multivoting!
All
those tips about how to make it work properly are really helpful. For anyone out
there thinking about using multi-voting for the first time, save those emails
and heed the advice!
Allow me to add a different colour to the discussion...
Look at
the use of multivoting from the other side for a moment, not as a facilitator
or academic. My only experiences with multivoting AS A PARTICIPANT have been
negative.
Why?
Because voting is still voting, no matter how many dots you get.
Voting
is not democracy;
Nor is
it thinking things through;
Nor is
it dialogue or participation.
Voting can only be a helpful, almost superficial tool to assist with making decisions. The greatest strength of voting is the ability to identify winners and losers. (Just ask Big Al and GW about their Florida experience.) As far as I'm concerned, creating losers is anathema to facilitation.
So let
me offer two caveats regarding multivoting.
A) Use
it sparingly, typically at either "end" of a decision:
1) to
validate a decision that has been thought-through already and just needs
legitimizing, or
2) as a light "tool" to initiate a more probing conversation. Don't leave participants thinking that multivoting is the end... Process/discuss what the dots visually illustrate for the group.
B) Use it infrequently, so that participants don't have a chance to start "working the system," as with any traditional voting situation. Let it be a helpful novelty.
I'm not
suggesting that multivoting not be used. Just be careful with it.
Voting
with dots is often a good way to intensify a conversation and, to me, a rather
unsatisfying way to finish one. All that work toward consensus and then to
finish with a vote - seems like a bit more effort on the part of facilitator
and the group would take the discussion to another level.
Using it after some options have been identified and discussed is a way of taking a snapshot of the group mind in a somewhat objective manner. The key seems to be having a very clear question and a dot system that is easy to use and difficult to manipulate. In many ways, it doesn't matter precisely how you do it as long as it makes sense in the situation. You want it to reflect a genuine sense of "where the group is" at that point in the conversation. It's like a "straw" poll.
Doing a
count helps those members of a group who strive for order, rationality and
fairness. A discussion on what is
revealed by the voting (or "marking") procedure can surface the
deeper values, principles and criteria that can be used positively in enabling
a group to get through to a consensus that has some real teeth in it. Marks or dots are a good way to bring out the second-level reactions and responses that enable a group to get to substance.
No
recounts - - - - please.
In the
context of voting for importance of individual items within a complex set of
inter-related items, a recent observation has been made that relates to the way
individuals sense importance (see www.cwaltd.com/index1.htm and select
"Identifying The Truly Effective Priorities In Complex Situations: The
Erroneous Priorities Law").
My take on this new research is that we, as members of our species, do have an innate ability to collectively SENSE the immediacy of some problems and of bottlenecks generating additional problems; however, we do not appear to have an innate ability to collectively SENSE leverage.
In
instances where a set of issues needs to be resolved as a unit, identifying and
selecting highly leveraged items from the list may be argued to be of greatest
aggregate priority. For this reason,
voting (while powerful and essential from a sociological context) may be
misguiding if not framed to show the audience where the leverage across the
system of issues lies.
(And
here we are discussing the structure of the ballot, again??)
I don't mind you using my contribution. I would like to suggest that any voting method, including dots, is, in my mind, to be avoided if possible unless it is used to stimulate discussion; it should not be used to make choices, in my opinion. When it is used to make choices, it has the same problems of all forms of voting: minorities, win-lose, etc. Part of the problem is that it tends to create the illusion of a satisfying result.
I
believe going through the "struggle" of coming to a consensus is much
more effective in the long run than reaching a decision. I do use dots to test the waters but never
for the purpose of choosing. It can be
used to clarify where issues are, where more work is needed to reach a
consensus, etc.
When
dots are used as a decision making method, it is always a way of selecting
between alternatives. There are many
better ways of doing that.
Sandy,
I like these
two additions to the process. The first
has hints of stratification, a useful statistical technique, and Kepner-Tregoe
weighting. The second broadens out the
field of view in an important area.
Having
said that, I guess I'd caution about putting too much faith in the outcome of
such dotting or voting. Voting can give
a quicker view into the energy of the group--what they think, where they're
committed, but it doesn't necessarily give great results. Read Doerner's _The Logic of Failure_ for
examples of how we all can be misled by our gut feelings.
I do
use dotting and other voting techniques to get a sense of the group, and I
think "better" voting techniques are preferable to "worse"
voting techniques, but I suspect there's a limit beyond which increased
accuracy in capturing people's feelings is overshadowed by underlying
challenges in working through the issues.
That doesn't say the participatory aspect disappears and we leave the
rest to the "experts"; it may say we need to involve other expert processes
which are focused on the system, not just people's impressions of the system.
I came
back recently from a long vacation and in haste to clean up my email database,
trashed a lot of stuff unread. As the dotty voting topic goes on and on, not unlike the one in Florida, I wish I had looked at the early discussion. I've
written before in this forum on my distaste for voting and multi-voting as a
solution to most issues faced by facilitated groups. Once we start involving
complex formulae, weighting factors, etc., we are far too open to getting an
answer that most of the people are still not happy with. (hmmm -- again like
Florida, I fear.)
Too
often, voting, multi-voting or similar decision processes do not rely on the
logic of the situation as much as on the emotion of the group, and often result
in the selection of a solution that the team is willing to take on, or wants to
take on, or thinks the boss wants to hear, rather than one that should be taken
on because its elimination can be logically demonstrated to lead to the desired
effect(s).
In the
three questions that management, or a team, or an individual has to answer
about a situation...
What to change?
<http://www.focusedperformance.com/what-to.html>
To what to change to?
<http://www.focusedperformance.com/to-what.html>
How to make the change
happen?
<http://www.focusedperformance.com/how-to.html>
...the
most important, and too often, the one given the least attention is the first
one -- WHAT TO CHANGE. Voting usually focuses on solutions - the to-what's and
how-to's without giving due respect to the core issues that need to be
addressed.
If I
have to choose, I'll take logic over democracy any day, in a problem solving
situation. If the logic is good, the collaboration will come without voting.
"Dot
Technique" has long been a favorite technique of mine because it creates a
visual element to the voting/prioritizing/selection process which has helped me
many times.
As we
move online, I've been experimenting on how to use it online, but have not been
satisfied.
Has
anyone taken the "voting with dots" technique and figured an
effective way to use it online in web-based discussion areas?
Nancy
White asks:
Has
anyone taken the "voting with dots" technique and figured an
effective way to use it online in web-based discussion areas?
Bernie
DeKoven replies:
Actually,
as a matter of fact, if webconferencing and group whiteboards are considered
"web-based", then, well, yes, most definitely.
What I
like about the DOTS exercise is that it gets people talking to each other. It
gets them off their individual perspectives and mingling amongst the merry
multitudes.
I've
found a similar effect when using a poll (live, interactive, real-time polling
like that offered by PlaceWare and WebEx etc.) and electing to show people the
results of their votes as they are voting (usually you keep this information
hidden until the poll is closed). I like to keep it open for maybe 5 minutes,
letting people change their vote as often as they want until the discussions
are over. This seems to work very nicely in a most dot-like manner. I can run
several polls in sequence if I need further refinement, though usually one
every 15 minutes keeps things going.
Using a shared whiteboard -- one that allows all participants to use the mark-up capabilities simultaneously -- you can get similarly dot-related conversations going.
Bernie,
Nancy,
I was
about to suggest things like the PlaceWare voting, too, when I realized it
missed one vital part of dotmocracy we don't have in US national elections
(with hopefully very limited exceptions): casting multiple votes for one
issue. I see how you can do that with
a shared whiteboard, but have you found a way to do that with PlaceWare's (or
Astound's or ...) voting that's effective in letting someone place all their
emphasis on one spot or spreading it around on multiple?
And I
agree: the important part is the dialog it surfaces, not necessarily the
numerical winner.
No.
PlaceWare, Astound voting allow you only to create single choice votes. But
it's easy to create more on-the-fly. I agree, it's only a virtual approximation
of true dotting.
Bernie
DeKoven <[email protected]>
No.
PlaceWare, Astound voting allow you only to create single choice votes. But
it's easy to create more on-the-fly. I agree, it's only a virtual approximation
of true dotting.
I was
thinking, if you had key pads, you could give everyone three key pads. I'm sure there's something analogous for
other systems, like giving everyone multiple logins.
At
Wednesday 15-11-00 07:17 -0800, Nancy White
wrote:
> As
we move online, I've been experimenting on how to use it online, but have not
been satisfied.
>
Has anyone taken the "voting with dots" technique and figured an
effective way to use it online in web-based discussion areas?
Hi
everyone,
Regarding using dotmocracy online, I
recently facilitated a group where we did much of our work over the Internet by
email, an Internet discussion forum (www.ezboard.com), and by conference call.
We used a dot-like process as well. All the online work was done asynchronously,
with people working on their own time and responding to tasks by a deadline. We
were never on-line all at the same time. I'm not sure about Nancy's
experimenting--was it with realtime Internet forums and all participants logged
on at the same time?
In our process, the group had come up with
55 potential issues for a multi-million $ investment program. The 55 issues
fell into 6 groups. One of our tasks was to come to consensus on the highest
priority issues. So we used dotmocracy to get an idea of people's preferences
for priorities.
The interesting departure from the N/3 rule
was that I wanted to get some really good discrimination between the high and
low priority issues. There were 7 voting members on the planning team: 5 from
industry and 2 from government. I assigned a total of 1,000 dots or points,
with industry getting 500 (100 each person) and government getting 500 (250
each).
The rules were: a) spend all your point
allocation, b) no more than 25% of your total points in any of the 6 major heading
groups, and c) no more than 5% of your total points on any one issue. Rule (b)
was so we had top priorities in each of the 6 heading groups and none of the 6
group headings would be eliminated. Rule (c) was so that one person could not
load points on one pet issue and give it more prominence than warranted across
the group. Rule (c) was specific to this particular process. In other
processes, I haven't used rule (c).
To carry out the assignment of dots or
points, I distributed a spreadsheet with the 55 issue titles arranged in a
column. The voters allocated their points in the adjacent column. There were
additional "check-sum" columns to give people instant feedback on how
many points they had spent and the distribution of their points so they didn't
violate our 3 rules. And the spreadsheet had a page for instructions.
People sent me their points allocations and
after collating them, I distributed the results, using the same spreadsheet
layout. The resulting points distribution led us into a discussion of people's
comfort with the outcome--were there any surprises, any anomalies? On reviewing the voting outcome, we agreed to drop any issues that received less than 15 points. In the 15-20 point range, there was a clear gap (discriminant analysis?).
As others have written, the voting was just to get an idea of the
group's preferences without anyone committing to the outcome of the vote (hmmm,
Florida?).
In this case, total time for the voting
process was about a week, mainly because each of the 7 planning team members
needed to consult with their constituencies and hold their own planning
meetings to "spend" their points. Once I received their points
spreadsheet, I turned out the collated version and written analysis within a
couple of hours, then posted them on the internet discussion forum.
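For anyone who wants the same instant feedback without a spreadsheet, here is a rough sketch of the rule checks described above (Python; the function name, sample points and group labels are invented, and only the three rules come from the message):

    def check_allocation(points_by_issue, issue_to_group, total_points):
        # Check one voter's allocation against the three rules above.
        problems = []
        # Rule (a): spend the whole point allocation.
        if sum(points_by_issue.values()) != total_points:
            problems.append("rule (a): not all points spent")
        # Rule (b): no more than 25% of the total in any one heading group.
        group_totals = {}
        for issue, pts in points_by_issue.items():
            group = issue_to_group[issue]
            group_totals[group] = group_totals.get(group, 0) + pts
        for group, pts in group_totals.items():
            if pts > 0.25 * total_points:
                problems.append("rule (b): over 25% in " + group)
        # Rule (c): no more than 5% of the total on any one issue.
        for issue, pts in points_by_issue.items():
            if pts > 0.05 * total_points:
                problems.append("rule (c): over 5% on " + issue)
        return problems

    # A made-up allocation for one industry voter with 100 points:
    votes = {"issue 1": 5, "issue 2": 5, "issue 3": 90}
    groups = {"issue 1": "group A", "issue 2": "group A", "issue 3": "group B"}
    print(check_allocation(votes, groups, 100))
    # flags rule (b) and rule (c) violations for issue 3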
Nancy
White wrote:
Has
anyone taken the "voting with dots" technique and figured an
effective way to use it online in web-based discussion areas?
www.zaplets.com
has some interesting tools (although somewhat simple and limited) for short
turnaround asynchronous uses.
A) You
can use the Advanced Poll feature to select up to 12 options that any size
group can vote on, and you provide all members (via email) of the group from
one to 12 "dots" or votes. Downside, participants cannot use more
than one dot (vote) on an alternative. And of course, this only works if you
are prioritizing 12 or fewer alternatives. A nice feature is that you can also
show the results in a pie or bar chart, AND you can specify if the results are
available to the entire group, AND if the voting is anonymous or open.
B)
Zaplets also has a spreadsheet feature you can use for assigning tasks and
other items.
C)
Another feature is the brainstorm/discussion zaplets which allow everyone to
comment on a single problem or idea and even provide attachments up to 2 MB.
Rather than having to run down a multitude of comments in a typical listserv or
egroup, they are all in one place or "post", grouped together in one
file (sort of;-), which makes for some convenience.
Gayle
Gifford wrote:
I
always wonder in multivoting why people choose what they do - and are they
making the best decisions.
I have
an interest which I believe is similar. I am about to conduct a vote based on
submissions to an employee idea generation program. The concern is how to
determine which ideas are "best" to work on. Obviously there are
problems such as people supporting ideas without commitment to working on them,
balancing between specific and general ideas, allowing support for
"minority" ideas that some people might be allowed to work on, etc.,
etc.
I had originally thought of a ballot with six choices: (1) Yes, do it; (2) Yes, and make it a priority; (3) Yes, and I will volunteer to work on it; (4) Maybe, let's get together and talk about it; (5) Maybe, but I need to hear more to understand and evaluate it; (6) No, don't do it. Pre-testing has tended to confirm the obvious here: this is too complicated.
I
therefore thought that there should be a Yes-Maybe-No initial vote and then
some follow-up votes. Ideas and advice on how to conduct such multivoting would
be welcome. There figure to be between 100 and 200 ideas, and there are 24
voters. I do want to begin with unedited ideas (so that people can see that
every idea submitted was considered), which means that repetition is a problem
and prioritization is even more difficult.
James Murphy <[email protected]> wrote:
I do
want to begin with unedited ideas (so that
people can see that every idea submitted was considered), which means
that repetition is a problem and
prioritization is even more difficult.
With
any form of voting or multivoting for prioritization, repetition of ideas is a
danger because it will dilute the votes for ideas that are repeated. A couple of things you could try:
Do an
affinity. Put all the ideas on cards or
sticky notes or whatever and have the "voters" put them in groups,
then multi-vote on the groups. This is
probably the fastest way to deal with a large number of ideas.
Put all
the ideas on flip charts, let people read them, then let them suggest which
ones should be combined. Get consensus
on the combining. Watch the group dynamic: are there some people trying to get
unrelated ideas combined so they get more votes? Are there others fighting a natural grouping so the vote for an
idea they don't like is diluted?
Use
nominal group technique (again, the limited definition). If there are 100 ideas, everyone gets a list
of the 100 ideas, each lettered "a" through "zzzv" (I think
that works out to 100) and a sheet with the letters "a" through
"zzzv" on it. Each person
gives each idea a number from one to 100 -- each number only used once, one for
the most important idea and 100 for the least important (I've never tried this
with more than 15 to 20 ideas. Theoretically it should work, but you might have
to give the participants a lot of time to work it out. You might want to give them a sheet with the
numbers 1 to 100 that they can "tick off" when they've used a
number). Then total up the numbers put
next to each idea -- you might need a spreadsheet and someone to help with data
entry. (Bernie et al.: any ideas for
automated tools here?) The idea with
the lowest number is the highest priority.
Theoretically, even if ideas overlap or are repetitive, if they are high
priority, they will each get a low number.
That will solve the repetition problem.
Then you can take the high-priority ideas and do some grouping.
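In answer to the "automated tools" aside, a minimal sketch of the tallying step (Python; the sample rankings are invented):

    # Each person ranks every idea from 1 (most important) to N (least).
    # rankings holds one sheet per participant, mapping idea -> rank.
    rankings = [
        {"idea a": 1, "idea b": 2, "idea c": 3},
        {"idea a": 2, "idea b": 1, "idea c": 3},
        {"idea a": 1, "idea b": 3, "idea c": 2},
    ]

    totals = {}
    for sheet in rankings:
        for idea, rank in sheet.items():
            totals[idea] = totals.get(idea, 0) + rank

    # The idea with the lowest total is the highest priority.
    for idea, total in sorted(totals.items(), key=lambda kv: kv[1]):
        print(idea, total)   # idea a 4, idea b 6, idea c 8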
On Wed,
15 Nov 2000, James Murphy wrote:
I had
originally thought of a ballot with six choices: (1) Yes, do it; (2) Yes, and make it a priority; (3) Yes, and I will volunteer to work on it; (4) Maybe,
let's get together and talk about it;
(5) Maybe, but I need to hear more to
understand and evaluate it; (6) No, don't do it. Pre-testing has tended to confirm the obvious here: this is too complicated.
This
ballot represents different types of cognitive tasks:
a)
approval or disapproval (combined with relative importance) (choices 1, 2, and 6)
b)
allocation of personal resources (choice 3)
c)
assessment of need for more information (choices 4 and 5)
In
general it is beneficial to differentiate cognitive tasks and to avoid tackling
more than one at a time. This basic
idea is one of the fundamental strengths of Nominal Group Technique and many
other group facilitation procedures.
On Wed,
15 Nov 2000, Ned Ruete wrote:
Use nominal
group technique (again, the limited definition). If there are 100 ideas, everyone gets a list of the 100
ideas, each lettered "a"
through "zzzv" (I think that works out to 100) and a sheet with the letters "a" through
"zzzv" on it. Each person
gives each idea a number from one to 100 -- each number only used once, one for the most important idea and
100 for the least important
Ned
noted his use of the "limited definition" of Nominal Group Technique
(NGT). Perhaps the specific procedures
of NGT are relevant to this discussion.
As described in the original article NGT is a six stage process which
carefully separates major cognitive tasks:
1) Silent generation of ideas.
2) Round robin recording.
3) Serial discussion.
4) Preliminary vote.
5) Serial discussion of the master list:
   a. Clarification
   b. Discussion of the preliminary vote and relative importance
   c. Additions
6) Final vote.
The
manner of conducting the vote is similar to what Ned described but limited to
assessing the priority of the top five items.
For
more information see:
Delbecq, Andre L., Van de Ven, Andrew H., and Gustafson, David H. (1986). Guidelines for conducting NGT meetings. In Delbecq, A. L., Van de Ven, A. H., and Gustafson, D. H., Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes. Middleton, WI: Greenbriar Press.
When I
am faced with these long lists, I sometimes use a technique borrowed from the
equipment reliability field called FMEA (Failure Modes and Effects
Analysis). One of my current fears is
that brainstorming data is more and more becoming "The Data" for
projects and initiatives, versus my training that teaches brainstorming data can
suggest where to go look first, but still requires the rigor of data
examination. One way I combat this is
using FMEA.
The
technique works very well with lists above 25 items, and begins to lose
something when it gets to around 100 (I think). Anyway, we make a simple column list of each item, then assign
3-6 other columns to contain the next layer of granularity to the brainstorm
data. These columns will contain the
features desired by the problem solving team such as speed of implementation,
cost, human resources required, benefit or impact, ease of maintenance, return
on investment, etc. Basically whatever
you value for decision making. Caveat
here: After 3 columns of features, each
column's individual strength begins to wane; beyond 5 columns, no single column
will carry much weight at all.
I generally assign a common Likert-like scale to each feature, using 1 through 5. For instance, a 1 would be a cost of more than $10k, a 2 $10k-$5k, a 3 $5k-$1k, a 4 $1k-$500, and a 5 less than $500. Once again, the team can choose levels of greater sensitivity to give more (or less) weight to a feature than others. Cost may be a minor factor (yeah, right) and we can set the scale to give a high value (5) even if the cost is high (a 5 becomes less than $5k vs. less than $500), or we may require a fast ROI, thus making a 5 there equal to a less-than-6-month ROI versus a 1-year ROI if ROI were less important.
Lately
I've added one more column.... there is usually at least one champion for each
idea on the list, so I ask the team to decide on a confidence level, expressed
in percentage ("I'm 60% confident on that one, folks.") that the
stated benefit will be realized. This
way, a fancy idea with little chance of real success loses some weight (still
experimenting with this one to see the unintended consequences). The factor is still just a SWAG, but I also
use it as a learning tool to make each pass through the tool better, so we
review those intervals and how right we were.
The
last column is called the RPN (for Risk Priority Number, that reliability
thing), which is the sum total of the other columns, multiplied by the
confidence interval for that item.
Using the highest values as the 'best' ideas to go after first, this
creates a prioritized list of your items. You should do a physical check of the
list to make sure nothing inadvertently was omitted, for closure.
Yes, I
am an engineer.
I have
a simple EXCEL spreadsheet I have used to roll up the data and will respond
favorably to private requests for it.
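Not Kyle's actual spreadsheet, but a rough sketch of the RPN arithmetic he describes (Python; the ideas, feature scores and confidence figures are invented, and only "sum the 1-5 feature scores, multiply by the confidence, sort highest first" comes from the message):

    # Each row: an idea, its 1-5 score on each feature column, and the
    # champion's confidence (as a fraction) that the benefit will be realized.
    ideas = [
        {"idea": "new intranet", "scores": [4, 3, 5], "confidence": 0.60},
        {"idea": "training day", "scores": [5, 5, 2], "confidence": 0.90},
        {"idea": "reorganize",   "scores": [2, 1, 4], "confidence": 0.40},
    ]

    for row in ideas:
        # RPN = sum of the feature scores, multiplied by the confidence.
        row["rpn"] = sum(row["scores"]) * row["confidence"]

    # Highest RPN first = the "best" ideas to go after first.
    for row in sorted(ideas, key=lambda r: r["rpn"], reverse=True):
        print(row["idea"], round(row["rpn"], 1))
    # training day 10.8, new intranet 7.2, reorganize 2.8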
On Wed,
15 Nov 2000, Ramsey, Kyle W wrote:
When I
am faced with these long lists, I sometimes use a technique borrowed from the
equipment reliability field called FMEA (Failure Modes and Effects Analysis).
The
method Kyle described is a form of multiple-criteria decision making, which is
a field of study that overlaps with the fields of decision analysis and
decision modeling. See for example the
International Society on Multiple Criteria Decision Making
http://www.mit.jyu.fi/MCDM/homepage.html
There
are a number of software packages that are designed for this type of analysis,
for example:
Logical Decisions
http://www.logicaldecisions.com/
Hiview
http://actinic.easyspace.com/acatalog/enterprise_lse_co_ukq/
... These columns will contain the features desired by the problem solving team such as speed of implementation, cost, human resources required, benefit or impact, ease of maintenance, return on investment, etc. ...
In such
analyses it is often useful to separate "benefit" and
"cost" criteria. There may be
multiple benefit criteria, which may be integrated via differential weighting,
and multiple cost criteria, which may be integrated via differential weighting.
The aggregated benefit score and the aggregated cost score may be integrated in
the form of a benefit/cost ratio.
These
approaches are extremely valuable in complex and controversial situations and
where multiple types of expertise come into play.
On Sat,
18 Nov 2000, Tony Wong wrote:
The 55
issues fell into 6 groups.
The
following alternative approach is complex and is useful only to manage a
similarly complex situation. It is
particularly well suited to situations where each sub-group has special
expertise that makes it legitimate for them to work on their own categories.
Also, it requires a tolerance for using numbers to express subjective values
and tradeoffs.
Each
category is assigned to a sub-group.
Each sub-group arranges the items in its category in priority
order. They express the relative
importance of the items more thoroughly by distributing 100 points among the
items. The highest priority items should
receive the most points.
The
subgroups report back to the large group, explaining the items and their
priority. The large group then
determines the relative importance of the categories. The relative importance of each category is expressed numerically. To determine the relative importance of each
item, the points for each of the items within a category are multiplied by the
relative weight of the category.
A
further elaboration of this technique is to estimate the cost for each
item. Costs can be actual dollars or
staff time, and/or subjective estimates of "costs." With this additional information, priorities
can be proposed based on the ratio of benefits to costs. The purpose is to make resource allocation
decisions based not only on what is most important or beneficial, but also on
the amount of resources that will be required.
For
example, it might be more effective for an organization to invest in a large
number of initiatives that have very high benefit/cost ratios, not because
their benefit scores are relatively high, but because their costs are
relatively low. Without this kind of analysis, an organization might invest in
one, or a very few, high priority but costly initiatives that will consequently
consume all of the available resources.
Most prioritization exercises focus on the
"benefit" or "importance" aspects of the available
alternatives. It is extremely valuable
to give as much attention to the costs.
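A short sketch of the arithmetic described above (Python; the categories, points, weights and costs are invented for illustration):

    # Each sub-group distributes 100 points among the items in its category;
    # the large group then weights the categories themselves.
    category_weight = {"operations": 0.6, "marketing": 0.4}

    items = [
        {"name": "fix backlog", "category": "operations", "points": 70, "cost": 20},
        {"name": "new tooling", "category": "operations", "points": 30, "cost": 60},
        {"name": "rebrand",     "category": "marketing",  "points": 80, "cost": 50},
        {"name": "newsletter",  "category": "marketing",  "points": 20, "cost": 5},
    ]

    for item in items:
        # Relative importance = item points x category weight.
        item["benefit"] = item["points"] * category_weight[item["category"]]
        # The further elaboration: a benefit/cost ratio for each item.
        item["ratio"] = item["benefit"] / item["cost"]

    # Ranking by benefit/cost can favour cheap items over costly high-benefit ones.
    for item in sorted(items, key=lambda i: i["ratio"], reverse=True):
        print(item["name"], round(item["benefit"], 1), round(item["ratio"], 2))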
Just an observation. When I lead a group in organizing a brainstormed list, once about 1/3 of the list is in clusters, no new clusters usually develop. So maybe there is a "rationale".
Deb wrote:
When I
have used sticky dots for prioritization, I have generally used 4 dots...2 of one color and 2 of another. One color signifies an issue related to general prioritization;
in other words, "I think this is important. People ought to care about this issue." The other color denotes commitment, i.e., "I am willing to personally
devote time and energy to this issue." It
quickly becomes clear to a group which actions in a field of options
have a valid chance of becoming a
reality...and also why some issues always seem to be a priority but never get off the dime. If the head says "yes, we
should" but the heart says "I
don't want to," is it any wonder progress can be slow going?
This is
useful. Any pair of values can be done
this way. In fact, more than 2 values
can be done. Of course, be careful that
the number doesn't create more confusion.
Deb,
I
appreciate your clarification of the two levels of prioritization. I always
wonder in multivoting why people choose what they do - and are they making the
best decisions. Or, the issue that only one person votes for, but that person
is the one with all of the energy and drive and passion for moving that item
forward, and the item itself seems to be the most strategic choice. How do
others balance the power of the vote against choosing the most strategic
issues, actions, needs driven by the data?
BTW,
I'm not sure where the name came from, but we call this process 'dotmocracy'.
Have
they tried these dots business in Florida?
[Sorry,
couldn't resist]
Bill Harris wrote: Having said that, I guess I'd caution about putting too much faith in the outcome of such dotting or voting. Voting can give a quicker view...
I
agree. A wise person once told me, "After your group votes (dots, etc.), run
the result by your brains. Even though you have been "objective", the
result may not make any sense!"
End