Moderator: Holly Reynolds Lee
October 21, 2008
2:00 pm CT

The transcript is from a recording of a teleconference training conducted by the SAMHSA Resource Center to Promote Acceptance, Dignity and Social Inclusion Associated with Mental Health, also known as the SAMHSA ADS Center. The views expressed in this training event do not necessarily represent the views, policies, and positions of the Center for Mental Health Services, the Substance Abuse and Mental Health Services Administration, or the U.S. Department of Health and Human Services.

Coordinator: Welcome and thank you for standing by. All participants will be on listen-only until the question-and-answer session of today's conference.

I would also like to remind participants that the conference is being recorded. If you have any objections you may disconnect at this time. I would now like to turn the call over to our first speaker, Miss Holly Reynolds Lee. Ma'am you may begin.

Holly Reynolds Lee: Hello and welcome to the training teleconference Evaluating Programs to Improve Social Acceptance of People with Mental Health Issues. This virtual training session is sponsored by the SAMHSA Resource Center to Promote Acceptance, Dignity and Social Inclusion Associated with Mental Health, also known as the SAMHSA ADS Center.

If you would like to reach us, you will find our contact information listed on slide 2. Our presentation today will take place during the first hour and will be followed by a 30-minute question-and-answer session.

At the end of the speaker presentations you may submit a question by pressing star 1 on your telephone keypad. You will enter a queue and be allowed to ask your question in the order in which it is received.

On hearing the conference operator announce your name, please proceed with your question. After you've asked your question, your line will be muted. The presenters will then have the opportunity to respond.

If we do not get to your question, please feel free to email the ADS Center to ask questions or follow up. We can put you in touch with the speakers of today's event.

Within 24 hours of this teleconference, you will receive an email request to participate in a short anonymous online survey about today's training. Survey results will be used to determine what resources and topic areas need to be addressed by future training events.

The survey will take approximately 5 minutes to complete. Before we begin, please let me say that the views expressed in this training event do not necessarily represent the views, policies, and positions of the Center for Mental Health Services, the Substance Abuse and Mental Health Services Administration, or the U.S. Department of Health and Human Services.

We have two speakers for today's call. The first one is Pat Corrigan and then Jon Delman. Pat, I'll turn it over to you.

Pat Corrigan: Thank you. I want to spend the next 20 minutes talking about how we measure anti-stigma programs, because I know there are many people on the phone who want to come up with an easy strategy by which they can collect data and see if their anti-stigma program is having any kind of effect.

And so let's talk about what in particular we're going to address. I want to, ever so briefly, because I'm sure the audience here knows about stigma, look at the issue of stigma and social injustice.

Or, of even more importance, the flip side: promoting inclusion and empowerment. Again very briefly, I'll review ways to address stigma and empowerment; it's important to have some idea of how to change stigma before we talk about ways to measure it.

And then I'll end up by talking about how to assess anti-stigma programs. To put this in perspective, I point the listener to a book by Sam Keen, a sociologist, called Faces of the Enemy.

And in this book he reminds people that throughout human history we've used media of different sorts to represent stigmatized groups in a disrespectful manner.

So we all know that presentations of people of color, of African Americans, have been very disrespectful, very horrid in the way they represent them. It's important to realize that the stigma of mental illness is on the same continuum.

This is an example from the New York Post that says "Freed Mental Patient Kills Mom," obviously a very disrespectful image of people with mental illness. This one gives you the flip side: not representing people with mental illness as dangerous, but representing them as big children, in this case a movie with Christopher Lloyd, Peter Boyle, and Michael Keaton, where they play people with mental illness who escape their hospital and have this kind of zany, wacky day around the neighborhood as a result.

In some ways, we might say the stigma of ethnic groups, for example, or of women has improved. One way it has improved is that you wouldn't see the media promoting disrespectful images of women, for example; jokes about women or ethnic jokes are really not appropriate anymore, and those kinds of things have gone away. But on the way home tonight you can tune in to talk radio, and how many times do they talk about nuts and wackos and kooks?

And the stigma of mental illness is still very, very prominent. I don't need to tell you about this as an issue. What we need to do is make some sense of what stigma and discrimination are.

So one way to look at it in this slide is to look at in terms of public stigma, what we the public do to people with mental illness when we buy into the stigma.

One of the big things we do is rob people of their rightful opportunities in their community: rightful opportunities to work, to live on their own, to have good health care.

And the reason we do that is because we buy into the stereotypes of mental illness. Stereotypes are beliefs about certain groups that are unavoidable. We all know the stereotype about, for example, Irish Americans: that they're all drunks. When you get to a certain age in our culture you just learn that stereotype.

Prejudice is agreeing with a stereotype: yep, all Irish Americans are drunks. And discrimination is acting on it: therefore I'm not going to let my daughter marry a drunken Irishman.

There tend to be two big stereotypes of mental illness. One is that people with mental illness are to blame for their mental illness, that they have no backbone.

And the second is that people with mental illness are dangerous. (I'm just learning about the red letters as they show up.) If we agree with the stigma, if we agree that people with mental illness are all dangerous, then we tend to endorse the prejudice and we end up discriminating against them.

Another way of looking at this public stigma is the whole issue of social inclusion: namely, people with mental illness should be included in their community and their world, not excluded, put away in institutions of one sort or another. Self-stigma is the second kind of stigma, namely what people with mental illness do to themselves when they internalize the stigma.

And this has impacts in one of four different ways. First, are you even aware of the stereotype of mental illness? So the question for a person with mental illness is: do they realize that the public thinks individuals with mental illness are dangerous?

That's awareness of the stereotype. Agreeing with it is: yep, I'm aware they're considered dangerous and I think that's true; I do think people with mental illness are dangerous.

Applying it to oneself is: not only do I think people with mental illness are dangerous, I'm mentally ill, so I must be dangerous. That can lead to significant consequences, shame being one.

Another is a significant loss of self-esteem: I'm not worthy of a good job because I could be dangerous. Low self-efficacy: I couldn't handle a good job because I'd be dangerous. And the "why try" effect: why should I try to get a job, somebody like me would never be able to be successful.

And what's the flip side of self-stigma? The idea of dignity and empowerment: that these internalized stigmas should be replaced with thoughts that support one's sense of self-respect and ensure people have power over their lives.

Now one thing we need to be explicit about is that the idea of self-stigma by no means suggests that all people with mental illness internalize the stigma.

A lot of people actually react negatively to the stigma and, as righteous citizens, respond to it with empowerment and anti-stigma efforts.

What can we do about stigma? Well, protest, education, and contact are the three ways of looking at this. I would suggest all the different ways of changing public stigma can be organized into these three areas.

Protest is an appeal to a moral authority: shame on us for representing people with mental illness in this disrespectful way, and urging people to stop thinking that way.

In terms of attitudes, protest actually makes them worse; the reaction is "don't tell me what to think." In terms of behavior, though, protest and economic power are going to have quite positive effects.

Education contrasts the myths of mental illness with the facts. For example, the myth about mental illness is that people with mental illness choose to be mentally ill.

The fact about mental illness is that mental illness typically is a biological illness that people are born with and it can have significant impact on their lives.

The third issue is contact, that by meeting people with mental illness, attitudes about mental illness can change directly.

So more about contact: one-time contact is good, multiple contacts are better. Meeting a person with mental illness one time at a Rotary meeting will change my attitudes, but seeing them several times at Rotary meetings will produce big, lasting changes.

What this implies is that perhaps one of the biggest ways to change stigma is for people to come out of the closet. Of course we can't be simplistic about this and encourage people to come out without being aware of the costs and benefits, but that may be a broad-scale way of changing the stigma of mental illness.

And when we talk about contact, who should we talk about contact with? Does it help that people like Rod Steiger and Margot Kidder and Patty Duke and Mike Wallace have all come out with serious mental illness—in their case, major depression? Does it help when a famous person comes out, or does it help when a neighbor or our coworker or the person we sit next to in church comes out with a mental illness?

The research suggests famous people coming out is okay, but to see people in their community coming out has a huge impact on changing stigma.

The third way of looking at stigma is what we would call label avoidance. Many, many people, people not yet in the mental health system, don't want to be labeled mentally ill.

One of the quickest ways to get this label is to be associated with, or get services from, psychiatrists or psychologists or the like, or to have the public know that you take a medication or that you're obtaining support.

What the research suggests is that in order to avoid the stigma, many people choose not to seek or find care. Sometimes professionals are guilty of calling this poor insight. What it really is is people not wanting to seek care because of the power of the stigma.

Alternatively, a lot of people seek care but choose not to participate in it, in particular dropping out of care. Some people call this non-adherence, a term which itself carries huge baggage; instead we see it as people who, because of stigma, choose to drop out, choose not to continue being known as a person with mental illness. And the key here is the idea of empowerment.

Changing public stigma—we've talked about that before, in doing contact, we're talking about doing contact with real people.

Bob Lundin is a friend and colleague of mine who has done these anti-stigma projects. He is a person with schizoaffective disorder, and these projects have been shown to have huge impacts in changing people's minds.

What can we do about self-stigma? One thing we know is that people who identify broadly with people with mental illness and who receive peer support, like consumer-operated services, tend to knock down their self-stigma greatly.

So people with mental illness who decide to participate in mutual help programs or drop-in centers can feel less stigmatized as a result.

In terms of empowerment, the more people have decisionmaking power and self-determination, the less likely they are to feel self-stigmatized; and the more likely they are to get supported services, services where a job coach or a housing facilitator works for them, the more likely self-stigma is to go away.

And what can we do about label avoidance? We can look at education in terms of challenging the stereotypes. We can look at easy and more private access.

There are now different groups across the country using websites where people who belong to a certain group, people who belong to a certain HMO for example, can go, learn about mental illness and how the stigma isn't true, and address issues like mental health literacy: what is mental illness, and what are mental health services?

What's out there? Before we talk about how to change it, we need to realize there's already a lot of thought about this. The World Psychiatric Association is in its 10th year of an anti-stigma program, with the web address there.

They are currently in more than 50 countries and over that time have given a lot of thought to how to change stigma. Equally, there's the group that's sponsoring this webinar, SAMHSA's Resource Center to Promote Acceptance, Dignity and Social Inclusion.

Promoteacceptance.samhsa.gov is a repository of anti-stigma approaches listed State by State, so you can go to a map of the United States and click on the State for which you want to see the specific services there.

And actually, I was in Calgary last week, and the Canadian government is starting up a 10-year anti-stigma and discrimination campaign.

As you can see, this campaign will be the largest systematic effort to reduce the stigma of mental illness in Canadian history. There are a lot of groups who have already given a lot of thought to changing stigma.

Getting to more specific examples, this is a book that Bob Lundin and I wrote called Don't Call Me Nuts! In this book we list different ways of coping with the stigma of mental illness and trying to change it.

This is available on Amazon. But there's also a range of other approaches that have been used. This is a public service announcement that the Ad Council recently did, I believe with support from SAMHSA.

The website for the Ad Council is there but the gist of the public service announcement is "what a difference a friend makes." In this situation the young man on the right is playing a video game with the young man on the left and in the course of this video game the gentleman on the right talks about how he's very upset and distressed and overwhelmed by things.

And the voiceover message to the guy on the left is "What should you do when your friend's upset? Be there." A very useful, just a wonderful message, because it tells people what to do when they have family members and the like who are having a tough time.

In addition to public service announcements, there are several media watch campaigns, in this case StigmaBusters from the National Alliance on Mental Illness, with their website.

You can actually go to the website and sign on. This is a group of thousands of advocates who regularly receive alerts from NAMI about disrespectful images.

In this case you see an advertisement for an amusement park, Fright Night Scream Park, showing an asylum and people with mental illness in straitjackets.

This kind of image unfortunately is not rare at all. We frequently see this kind of thing as a way of making fun of people with mental illness.

So the media watch is another approach that's already out there. And the third, as I said, is contact, which is very important: personal stories where people with mental illness talk about their way down, the difficulties brought on by their challenges; their way up, how they were able to recover; and their current day, how with a sense of self-determination and support they are able to set and address the goals of interest to them.

The Canadian government also said for example there's a temptation for the commission to spring out of the gate with a mass marketing campaign, but that would be a mistake.

The concern is that a lot of anti-stigma approaches may be ahead of the data: we may think something works, but we just don't know in fact whether it does. However, we do not have the luxury of saying let's wait for the data, let's wait 2 or 3 years for a randomized design to determine whether a specific intervention works.

We need to intervene now. And so my goal is to share with you some ideas about how to measure different approaches, in terms of public stigma and self-stigma in particular.

We've written a book called Beat the Stigma and Discrimination! Four Lessons for Mental Health Advocates. The ADS Center has a copy of this, which I'll make available to you for free over the Internet.

It is about an 80-page monograph that specifically addresses the idea of how to change the stigma of mental illness.

What I want to do, though, is address specific issues in it. How do we address and change public stigma? We want an evidence-based approach, and by evidence-based approach we mean two things.

One is we want to collect rigorous information to determine whether or not a specific anti-stigma project works. Ideally, what we would do is a carefully crafted study where we'd randomize the general public to one of two conditions and measure the impact on stigma.

Again, the problem with that is that it's a very costly kind of study to do: costly in terms of the research staff you need, costly in terms of the amount of time needed.

And instead what we need to look at is a more user-friendly kind of approach, one that, for example, many of you on the phone can use when we're done.

In trying to address public stigma, we need to understand who is to be targeted by the anti-stigma effort. There are some definite benefits to public service campaigns, where we are seeking to shift the broader population's perspective on the stigma of mental illness.

What is equally important, if perhaps not more so, is the idea of targeted stigma change. So what I want to do is work with the employers who are not hiring people with mental illness because they buy into the prejudice.

Or the landlords who are not renting to them, or the healthcare providers who aren't providing the full breadth of health services people are entitled to.

One of the nice things about deciding who to address is that it tells you what needs to be changed, what particular aspect of stigma relates to each targeted group. So knowing we're going to target employers tells me what I want to change: I want them to hire people with mental illness so those people can work. Knowing I'm targeting landlords, what I want to change is that they rent to people with mental illness more. I want to work with community colleges, and what I want them to change is that they're more likely to bring people with mental illness into their college.

The question is how will this be changed? This, again, is what the ADS Center has on its website: a list of specific change strategies. And when will we do it? But our focus today is how do we measure it.

In terms of measuring it, Jon, who's coming after me, is going to talk about an essential aspect of measuring anti-stigma approaches. Fundamentally, that's participatory action research: any kind of anti-stigma program, any kind of measurement of an approach, will be totally unsuccessful if we don't have key stakeholder groups as part of the effort.

Clearly essential among the key stakeholder groups are consumers, but in addition we may want to include family members or providers or media people, a vast array of groups whose feedback we want on what we should particularly be targeting in trying to change stigma.

So how do we measure public stigma? The ADS Center has, for you to use, the Attribution Questionnaire-27. It's a measure that our group has come up with, and it has quite an extensive literature on it; if anybody's interested in the research, you can email me and we can talk about it.

In doing the Attribution Questionnaire, what you do is provide the questionnaire to people and have them read the vignette: "Harry is a 30-year-old single man with schizophrenia. Sometimes he hears voices and becomes upset. He lives alone in an apartment and works as a clerk at a large law firm. He's been hospitalized six times because of his illness."

Respondents then answer 27 items about this: how dangerous do you feel Harry is, do you believe he's to blame for his illness, would you be angry with him. The 27 items have been shown to represent key aspects of public stigma, what we call the sub-factors. There are nine sub-factors: the 27 items reduce to nine factors, so three items go to each factor.

The first of these factors is blame, the belief that people with mental illness are to blame or responsible for their illness. Second is anger: that I am angry with them because they're to blame and tend to get away with it.

Third is pity, the idea that people with mental illness are pitiable. Next is help: that I'm not going to help a person with mental illness because they're not deserving. Then come the big three in terms of the stigma of mental illness: namely, I believe they're all dangerous, and therefore I fear them, and therefore I'm going to avoid them.

And finally, because they're dangerous and I fear them, I'm going to be coercive: I'm going to tell them they have to take their meds whether they want to or not, or I'm going to institutionalize them, which in the old days meant State hospitals and now tends to mean these mental health ghettos in different parts of cities.

So you can take those 27 items, people will fill out all 27 items, and then you add up the scores: three items for blame, three items for anger, right on down the line. If you obtain a copy of the measure, it spells this out much more specifically.
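To make the scoring concrete, here is a minimal sketch in Python of summing the 27 items into the nine factor subscores. The item-to-factor assignments below are hypothetical placeholders for illustration only; the actual scoring key comes with the measure itself, which you can request from the ADS Center or Dr. Corrigan.

```python
# Illustrative AQ-27 scoring: nine factors, three items each.
# NOTE: the item numbers assigned to each factor below are made up
# for this example; use the key that ships with the measure.
FACTOR_ITEMS = {
    "blame":         (1, 2, 3),
    "anger":         (4, 5, 6),
    "pity":          (7, 8, 9),
    "help":          (10, 11, 12),
    "dangerousness": (13, 14, 15),
    "fear":          (16, 17, 18),
    "avoidance":     (19, 20, 21),
    "segregation":   (22, 23, 24),
    "coercion":      (25, 26, 27),
}

def score_aq27(responses):
    """responses: dict mapping item number (1-27) to a rating.
    Returns one summed subscore per factor."""
    return {factor: sum(responses[i] for i in items)
            for factor, items in FACTOR_ITEMS.items()}

# One respondent's made-up ratings: every item rated 5.
ratings = {i: 5 for i in range(1, 28)}
print(score_aq27(ratings)["blame"])  # 15 (three items of 5 each)
```

In a real evaluation you would run this once per respondent, then average each factor's subscore across respondents at each time point.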

Let's try to do some research with this. In particular, I'm interested in an anti-stigma program, one in which a person with mental illness comes out and tells his story, and I want to see what kind of impact it has on 30 people, 30 college students.

So if you can see, in the box on the right, pre is in green, post in yellow, and follow-up in red. Pre-testing is administering the Attribution Questionnaire just before the gentleman comes in and tells his story.

So it's pre, or ahead of, the anti-stigma approach. Post-testing is after the person comes in to tell his story, right away they take the same Attribution Questionnaire again. Follow-up is, well I wonder if this works over time?

Follow-up is coming back and asking the questions 1 week later. So at pre, immediately before, they complete the 27-item measure; at post, immediately after, they complete the same items again. And follow-up can be as long as you want; in this case it's 1 week.

I don't want to be too complicated about things so I want to just look at four items from the Attribution Questionnaire to see how it works out: blame, danger, pity, and help. Blame, again, is the idea that people with mental illness are responsible for their mental illness. Danger is that they are threatening.

Pity is that they're pitiable, and help is whether to help them or not. What I do is take the 30 college students, take their scores for blame, and figure out the average. I figure out the average at pre, which in this case looks to be about 22.

I figure it out for post, where blame in this case looks like 12, and then I figure it out for follow-up, where blame looks to be about 11.

Looking at the data, what can I tell? It looks to me like there's a huge reduction from pre to post on blame. So as a result of the program, people are less likely to blame individuals for their mental illness.

For dangerousness you see a big reduction from pre to post, from about 20 to 13, with an even bigger reduction in dangerousness at follow-up. For pity, we see a big reduction from pre to post which stays down at follow-up.

Now for help, you see a big increase from pre to post. You have to remember that the way help works is the higher the score, the more you want to help people. So in this case it looks like they want to help people more often.
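The pre/post/follow-up comparison described above amounts to computing a mean factor score at each time point and looking at the change. Here is a sketch, with made-up numbers standing in for the 30 students' summed blame subscores (the real data would come from the AQ-27 administered at each time point):

```python
import random

random.seed(0)  # make the made-up data reproducible

# Hypothetical blame subscores for 30 students at three time points:
# before the speaker's story (pre), right after (post), 1 week later.
pre       = [random.randint(18, 26) for _ in range(30)]
post      = [random.randint(9, 15) for _ in range(30)]
follow_up = [random.randint(8, 14) for _ in range(30)]

def mean(scores):
    return sum(scores) / len(scores)

for label, scores in [("pre", pre), ("post", post), ("follow-up", follow_up)]:
    print(f"{label:>9} mean blame = {mean(scores):.1f}")

# A drop from pre to post suggests the program reduced blame;
# follow-up shows whether the change held 1 week later.
print(f"pre-to-post drop = {mean(pre) - mean(post):.1f}")
```

As noted later in the talk, eyeballing such a drop may be enough, but whether it is statistically reliable would take a formal test (for example a paired comparison), which a research partner could run on the same data.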

There are other issues to consider in terms of evidence-based considerations, but looking at where we're at with time, let me go past that and talk finally about our measure of self-stigma.

We have an instrument called the Self-Stigma of Mental Illness Scale. This too has a fair amount of data documenting its success. In doing this scale, people complete a series of questions about people with mental illness, for example "most persons with mental illness cannot be trusted" or "most persons with mental illness are unable to get or keep a job," and rate how much they agree with each on a nine-point scale, where nine is very much and one is not at all.

There are 40 items. Ten of the items represent awareness: "I think the public believes most persons with mental illness cannot be trusted." Agreement is "I agree that people with mental illness cannot be trusted."

Applying is "I have a mental illness, and so I can't be trusted," and the fourth is when I beat myself up because I believe these things about my mental illness.

It's the same as with the Attribution Questionnaire: we figure out the pre scores for each of the factors—awareness, agreement, application, and beating oneself up—and we figure out the post-test and follow-up scores in the same fashion.

In terms of awareness, we don't really see a lot of change: from around 82 at pre to around 78 at post and around 75 at follow-up.

We do see a pretty big change in agreement, in applying, and in beating oneself up. You notice I said it "looks like" a big change, and that look may be enough to tell you whether or not your changes are effective.

But ultimately that's what statistics are for, and so if you wanted to go a step further, you could partner with somebody to do some statistical analyses to find out whether these differences are in fact meaningful.

But again, just looking at the scale can many times tell you. In terms of how to get help and how to pursue this and move ahead with it: the Resource Center to Promote Acceptance, Dignity and Social Inclusion is already a superb resource for getting some sense of what existing programs are out there.

And in the process, they could be in a position to support manual development, so a lot of these good ideas can actually be translated into manuals from which other people can learn.

And there's the idea of providing training on specific interventions and their fidelity, I mean, strategies by which local stigma-change programs can be supported.

In addition, I think some resource center, whether it be the Resource Center to Promote Acceptance, Dignity and Social Inclusion or another center, should look at trying to make sense of evaluation approaches and provide technical support to people who want to get a better idea of how to measure changes in stigma.

Our website, for the Chicago Consortium for Stigma Research, is www.stigmaresearch.org. We've been funded by the National Institute of Mental Health as one of the few research centers in the country specifically looking at the stigma of mental illness.

And with that I'll turn this over to Jonathan and look forward to your questions.

Holly Reynolds Lee: Well thanks, Pat; this is Holly. Before Jon starts let me just say to all of our participants that we here at the ADS Center want to collect best practices for evaluating the types of programs we're discussing today, so we can share these experiences with other people who are interested in the work of evaluation.

So we invite you to send us a description of what you've done or a summary of what you're planning for the future.

You'll find the ADS Center contact information on our website and also in the slides for today.

So Jon, I'm turning it over to you. Jon, are you there? Hang on, folks; let me see if I can figure out what's going on. Jon if you can hear me, try pressing star 6 on your keypad.

Well folks, while we wait for Jon to rejoin our call let's go ahead and move into Q and A and take some of your calls. So if you have a question let's go ahead and press star 1 on your keypad and we'll be happy to take it.

Coordinator: One moment.

Holly Reynolds Lee: Thank you.

Coordinator: Tanya your line is open.

Tanya Zumach: Hi, this is Tanya Zumach from Metropolitan Group. Pat, thanks very much for your presentation. My question is this: the AQ study that you shared with us is based on measuring the impact of personal contact.

And as we know, personal contact is really ideal, but it's really hard to implement on a large scale, in a way that has a broader impact on society.

So I'm wondering if that has also been used in other types of campaigns like advertising or other less involved interactions with folks.

Pat Corrigan: Good question. The graphs I showed you were constructed to provide an example for you. This is not our actual research data—again you can contact me for that. That was shown as a personal contact kind of intervention.

We've also used it for education, looking at the myths of mental illness versus the facts and the Attribution Questionnaire is sensitive to that too.

Jon Delman: Hello, can you hear me?

Holly Reynolds Lee: Jon, yes we can, welcome back.

Jon Delman: I apologize.

Holly Reynolds Lee: That's okay.

Jon Delman: Should I get started?

Holly Reynolds Lee: Let's—maybe we can finish answering this question and maybe a couple—one or two more and then we'll move into your portion if that's okay.

Jon Delman: Beautiful.

Pat Corrigan: Yep, I answered that question.

Holly Reynolds Lee: Excellent, do we have another question operator?

Coordinator: Yes, (Carrie), your line is open.

(Carrie Horn): Hi, this is (Carrie Horn) from Maine, and I have two questions. One is, it was really inspiring for me to learn about evaluations, so I was wondering if the questionnaires that you used, Dr. Corrigan, the ones you mentioned on the call today, are available for us to use to evaluate our programs, or if we should call the ADS Center for that? And then my other question is, are the slides from today's presentation available to us too?

Pat Corrigan: So the measures are for you to use. I believe the ADS Center is going to come up with some mechanism to get those to you.

(Carrie Horn): Okay great.

Pat Corrigan: Is that right Holly?

Holly Reynolds Lee: We will. We will post the slides on our website, as well as the playback information, so that if anybody missed any portion of today's call you can listen to the recording and get all of the slides. That information will be posted on our website, hopefully by the end of this week; if not, then certainly next week.

Pat Corrigan: And is the ADS Center going to pass on copies of the tests, or am I?

Holly Reynolds Lee: The ADS Center has copies of all of the materials that you gave to us so people who want that information, just contact the ADS Center by phone or by email.

You can request the survey info or the other materials and we'll send that out to you.

Pat Corrigan: And so that represents two measures. Again, one is the Attribution Questionnaire-27; that's 27 items, and we do have a nine-item version, which is obviously a lot shorter, with some evidence that it works.

So email me, I can pass that on to you. And we also have a measure called the Recovery Scale, which is looking obviously not at the bad things of stigma but the positive things as a result of inclusion.

I don't believe the ADS Center has that but email me, I can pass that on to you also. My email by the way is corrigan@iit.edu.

(Carrie Horn): Excellent, thank you.

Holly Reynolds Lee: Great, and at this point let's go ahead and move into Jon Delman's presentation and folks we will take more questions at the end of his portion of the presentation.

Jon Delman: Thank you Holly. Okay, my name is Jonathan Delman, on to the next slide, and I'm the Executive Director of a consumer-run research and evaluation organization.

And I have bipolar illness myself so in one sense we're an example of the contact method of trying to challenge stigma—specifically that people can handle work responsibly.

But that's not our primary mission. Our primary mission is to create opportunities for consumers and family members to be involved in evaluation and research, and we aim to really make change.

We're not interested in publications per se but in making sure that research and evaluation information is used for practice and policy.

So the first thing I would like to address is how organizations can work with or develop working relationships with academia or other research organizations, because they have a lot of expertise obviously and tend to be useful.

And I've done this quite a bit, with some really great success; other times it has not worked out so well. So first of all, believe that you're important.

You have great information; you have great potential data to collect. And that's what academics are often interested in: data that can be used to demonstrate some sort of theory, or that something works.

So peer support, anti-stigma—those are things there's a fair amount of interest in, so you have a bargaining chip. For example, several years ago I tried to make contact with a research institution about working with us, and they never responded to me.

And at one point they contacted me about being a key informant for them in an evaluation. And I said, well wait a second, you haven't gotten back to me yet. So they agreed to meet with us in exchange for my being interviewed by them.

And since then we've had several really great projects together. So you can put it any way you want, but it's important to find your way into that door and really show academia that you're a real person and that you have interests and passion.

Another key thing: I wouldn't work with an academic institution just because it's available per se. Do your background research yourself. Check on the web and find out what individual institutions and professors are interested in.

That's often where the connection is made. And in particular, look at institutions that have an interest in policy, because what we're talking about here ultimately is policy change.

A lot of straight researchers don't necessarily want to work with the community, but people who are interested in policy are more likely to. And oftentimes an institution will have the word policy in its title.

I think that's a good sign. And then you need to develop those relationships. So a professor joined my board of directors. She's no longer on it anymore but we've gotten an NIMH grant together and developed a really great working relationship.

And I sat on boards for some of these other institutions so that's a great way to kind of get to know people. And for an organization—I mean we're a small nonprofit like many of you out there. We don't have hard money per se.

So over time we've developed relationships with our Department of Mental Health, which has some sway in certain instances, and other such organizations, so it's good to have allies when you're really trying to get academia to work with you.

Here's the tricky part really. So in any sort of collaboration there can be a lot of confusion as I'm sure you all know. And in my experience you want to be up front as to who is responsible for what, because if that doesn't happen, when things get busy on the academic side, it may be easy to say, well we didn't realize we were supposed to do this, we put our resources elsewhere.

And that could go on the community side too. This probably goes without saying, but specify personnel: I'm aware of an instance from a few years ago where an organization contracted with a university to do an evaluation, and the university assigned someone who was really green and didn't know much about the substance.

So that didn't work out too well. I would say take names and, you know, develop those personal relationships. Maintain contact—this is about communication during the evaluation—and agree on methods of accountability. That's really a tough one, but a signed contract is in essence a method of accountability.

How will reports be used? So there may be differences of opinion in the interpretation of the data. So what happens when that happens?

And it may be that academia really wants to own the data so they can publish without being concerned with what the community agency has to say. So those are the things that are really tricky.

Ideally you collaborate with academia, and if it ends up there are articles, you can write them together. And oftentimes we're producing policy reports to demonstrate that an evaluation produced something, and we recommend this.

And I think you want to make sure that you have an understanding with academia as to how those reports come out. And they'd be better coming out from academia because they would be seen perhaps as more objective.

On the other hand if it comes out from the community organization you have a little more control, so something to keep in mind—I lost—there we go, okay. All right.

Oh, yeah, get to know people, have a coffee with them. As I said, demonstrate that you're a real person. Cross-training is something that is done in community-based participatory action research, which we do a lot of.

Essentially what that means is the researchers do a little training for you on what evaluation is, and you do some training for them on things they don't know about. Things our consumer organizations have trained on include (strengths-based care) and use of language. Ultimately people on both sides found it interesting, and I think it demonstrated again, as Patrick said, that contact is really an effective way of fighting stigma—which is kind of integral to developing relationships with academia.

It is fighting the notion that because a group has a mental health condition, or is made up of family members, it is not able to do something. We—I should say we—may have a little less education, but we're capable.

Debrief after collaborative events, celebrate achievements. Okay. All right, so let's say that you don't have a lot of academic collaboration or just a tad, and you have to think about how to do this evaluation on a low cost.

And I just put out here the kinds of quote-unquote outcomes that people tend to measure—and actually Patrick alluded to the direct post and the continuing measure.

Well the direct post is a short-term outcome. What happens right after the training? Or as people are leaving the training, can you get—have them take one of the measures that Pat talked about, whether it be the Attribution scale or the Self-Stigma?

Long-term, as he noted, is if you were to give it to them a month later or much later. He made that point; I just wanted to reinforce it. And output is really how many people are at a training, for example.

Satisfaction is also a kind of outcome, though I'm not sure that's what you're going to be looking at in terms of demonstrating outcomes. Satisfaction really is important in terms of demonstrating the feasibility of an intervention, whether it be a speakers' bureau or the use of video to show what people can do.

And the idea—and this is often missed—is that even though something may be an evidence-based practice, if people don't show up for that practice, it's going to have a limited impact, and that ultimately makes it cost-ineffective.

So satisfaction can be useful, particularly in terms of improving how you present something.

All right, here's a big issue. Evaluation costs, now these are some of the costs that people probably don't think about when you're developing any kind of evaluation.

I don't know, maybe people do but when people go into something you may think that if you're giving a pre-test and a post-test that you know it's going to be not so hard and not take a lot of time.

But it ultimately does take more time than people would consider possible. And that's a percentage of someone's salary, or a group of people's salaries.

One thing that really takes a lot of time believe it or not is entering and checking data. You need to set up a database and you need to have someone who is good at this; not everyone is good at entering data.

It can be—it can feel monotonous. I actually like to do it once in a while because it's a break from a lot of the other interesting work that I do.

But a lot of people don't enter data very well, and as part of that you're going to need someone to check or double-check—at least a spot check, one in every three or five surveys—to make sure the data is being entered correctly.

So that takes a little bit of time. There's also, you're going to need someone in the organization who is overseeing the project from the development of the objectives to the development of the instruments to the data collection to the placement of the data.

And to the analysis and the report—and it might be multiple people involved. So that person is taking on, at least for a period of time, a significant task.

And it's going to take away from other work that they might do or they may have to work overtime. Another cost is consultants, which I think is often a very good investment.

Training in data collection and statistics can really pay off in the long run. Printing may not be a big deal, but if you want your reports to look nice it can cost a few bucks.

And I talked about data collection, this doesn't happen too often these days and Patrick has made his surveys available, which is really great. But at times there are commercial institutions that require you to purchase surveys.

I've never done it, but that can happen, and I don't know how much they charge. At times what I'll do with other researchers I've worked with is ask permission to use their survey.

And they'll say yes and they might qualify it by saying well you know, I'd really like to see the data you collect because I'm interested in seeing how this instrument works in a variety of places and to test its validity and reliability.

And I typically don't have a problem with that. I think it's a nice trade-off. With quantitative data, you're going to really benefit from statistical software such as (SPSS).

You can get a student version for under $100 and it's really easy, pretty easy to use so I don't—I have no connection with (SPSS), I've just used it for 8 or 9 years and otherwise don't have a lot of experience with statistical software.

Travel, staff may have to go to a particular site to give the survey out. You know travel is expensive today, a little less expensive than it was a month ago, but you know it will be expensive again probably.

And then there are accommodations if you have to travel far, particularly in large States, if you're doing trainings or interventions all over the State. And there's opportunity cost—as I mentioned before, there are things that aren't going to happen because you're doing this evaluation.

And meaning that for example if you weren't doing this evaluation you might devote that time to conducting a training. So that's something to take into account when you assess the approximate cost of doing this sort of evaluation.

Now I'm not going to go over this too much because ultimately I think it's a call you want to make based on some consultation with colleagues, people who have done this.

But it really is a cost-benefit analysis. Is it worth it? And part of it is what do you hope to get out of the evaluation?

If you're in an environment where programs are being cut, then you may feel more of a need to demonstrate some sort of outcome, so that you're ready to present it to whoever's making decisions.

Hopefully they are good outcomes, but you know it's worth thinking about that because not a lot of people are doing it.

Yeah, and then what is the likelihood of completing that evaluation? I think if you plan it right you can get close to 90 percent, but I always think this is worth considering and doing.

So here are the keys—I'm being repetitive here, but keep things simple. Don't be too ambitious if you really want to keep it low cost, particularly if it's your first one.

After that you may think, hmm, maybe I could do this a little bit more and bring in academia. As I said, consider an early investment in evaluation training and consulting.

And consider ongoing technical assistance. There may be organizations that are willing to do this; I have found them at no cost, because they are interested in perhaps developing a working relationship with you.

And as I said earlier, you have some leverage perhaps. And I'm going back to kind of Pat's discussion about pre- and post-evaluations because that's going to be the focus of really the remainder of my talk.

Because if you take a measure prior to something, and use the same instrument to measure after it, you may be able to detect some sort of change.

And sometimes that will be statistically significant, other times it won't, but with the statistical software you can determine if something's statistically significant.

And if you can say that, then that means something to a lot of other people—as it should. So this is something that is quite doable. I wouldn't have a comparison group—it's too complicated for this sort of evaluation—and without one you won't be able to prove causation.
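
The pre/post comparison Jon describes can be sketched in a few lines. This is a hypothetical illustration in Python rather than (SPSS); the scores are invented stand-ins for a short stigma measure, not real program data:

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post scores on a short stigma measure (higher = more stigma);
# each position is the same participant before and after the presentation.
pre  = [32, 28, 35, 30, 27, 33, 29, 31, 34, 26]
post = [27, 25, 30, 28, 24, 29, 26, 27, 30, 23]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# Paired t statistic: mean change divided by its standard error.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"mean change = {mean(diffs):.1f}, t = {t:.2f}")
```

A large absolute t (compared against a t table with n − 1 degrees of freedom) suggests the pre-to-post change is statistically significant; as Jon notes, that demonstrates association with the training, not causation.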

I wouldn't go around saying this changed because of this, although there are ways you can narrow that gap. But you can demonstrate there is an association between your training say or your presentations and a change in say attitudes about stigma.

And one thing I found you can do—here I'm referring to the number of people who take a training—is repeat the training over a period of time, maybe five or six times, so you can really raise the number of people you're collecting data from. The idea is to keep your training or presentation consistent.

So you're measuring the same thing—that's something that hopefully you can do. Now, this is my particular view. And again, I like to keep things simple.

And believe it or not, open-ended questions can be a real burden in terms of analysis time. We do quite a bit of qualitative evaluation, and we love it, but it takes a lot of time, and we really up the budget for it.

So I think it's okay to have a few questions but you know like Patrick's instruments, you really want to have them more quantitative. Those are actually easier to analyze.

I would have no more than five response options per item—so one, two, three, four, five would be the choices.

It could be longer but I think the analysis would be more complicated and I don't think more than five is always that necessary.

Now here, again referring to what Pat said: locate existing instruments. From what I can tell he's presenting some really good instruments—his anti-stigma, I'm sorry, Self-Stigma scale and the Attribution scale.

And you know consult with people familiar with the topic, that's Pat, and probably some other people. Review the literature and I Google quite a bit initially to find out what's out there. It really is useful. I was doing an evaluation of a peer specialist training and I was able to come across something that researcher Judith Cook had done.

I was in touch with her and she said sure, you know, take what you want. So that made my task a lot easier. Oops, sorry. All right. So I personally think that it's okay to adapt an instrument to meet your needs and objectives.

Now, I wouldn't fiddle with something that's really good. So when Pat talks about the Self-Stigma Scale, I do like the nine-item version. I'm not sure I'd go with a 40-item version, because people peter out when they're working through an instrument.

They are less likely to complete it, etcetera—and I think those longer versions are really good for research. But again, a nine-item scale would be perfectly fine.

We've used 15-item scales, which I think are fine, but beyond that you might lose somebody. As I said, I would keep it short. For my purposes in terms of evaluation, we're not trying to demonstrate something new—we're not trying to discover something.

We're trying to show that it produces outcomes. And I think you can do that with fewer items.

All right, let's see if I can work my way through this. Let's see, so I wouldn't try to assess more than a few domains. So let's say you did a training or a presentation on stigma. I know there's a speakers' bureau out there. You could perhaps give the two scales, well certainly the second scale that Pat mentioned, the Self-Stigma one.

I take that back—that might not work for non-consumers; if it was presented to consumers, that would be really great. And the other scale would be good for perhaps both consumers and non-consumers. So those might be your key domains.

I wouldn't go off and add three or four more of them; I think that frankly increases the cost. So training is primarily focused on increasing knowledge. Hopefully you get an effect on stigma.

But you're going to be able to I'd say really get your most valid assessment if you measure something that's clearly most directly related to the intervention.

And a training is typically aimed initially at changing knowledge, and also attitudes—you might want to include that, and that can include assessments of both self-stigma and attribution.

The knowledge items obviously depend on what your goal is for the training. And participant self-report—I think that's pretty much what we're talking about here.

And I talked about number four. Okay. The best thing to do is just give people the post right after the event.

Take my word for it: if you need to follow up with people, you're going to lose some of them. It costs money to really follow up, whether by mail or by phone—and a lot of people don't have phones, or they only use cell phones.

I think in terms of a real good low-cost evaluation start off simple, get the measure right afterwards. And from there you may be able to think bigger and as I said you don't want to lose a lot of people to follow-up because that's going to really bias your assessment.

Because oftentimes the people who don't respond are the people who maybe didn't have the best outcome. I—the web I think is really great and I think more and more people are learning to use it and particularly if you have an ongoing relationship.

And let's say you're developing a peer specialist training and you're going to continue to be in touch with the peer specialists, it may be easier to have them just type in a code on the web and put down their responses.

All right, so I'm quickly going to use an example here of something where I had some success but some lack of success too. But we have a certified peer specialist training program in Massachusetts.

And we were given a little bit of money to evaluate that training. It has been a 5-day intensive training; people go home for the weekend, and then there are 3 additional days of training.

So it's pretty intensive and you know a month later people take an exam to become certified. So that's sort of the backdrop. As I mentioned before we reached out to Judith Cook.

Her instrument was about 80-something questions. We worked with the consumer-run organization The Transformation Center that was here doing the training and boiled it down to 15 knowledge items that were specifically addressed in the training.

We gave the pre-evaluation right before the training. For the post, we decided to see if the effect would last, and gave it 4 weeks afterward.

And we did that—we mailed the surveys to people. So how did that turn out?

We were able to find that knowledge did increase in a significant way, but we had a low return rate—60 percent—because we didn't have incentives for people. We didn't say you'd get $5 or $10; we just essentially asked them to do it.

And there's a lot of reasons people don't return things. So that probably affected the validity of our assessment.

So in terms of statistics, do it each time you provide a training or a talk, and then, if you do it a lot of times, do it overall and see what comes up.

You can use basic chi-square statistical significance testing, which I think is teachable with the use of something like (SPSS). We have someone here at my organization who really knows that and does it, and it doesn't take her long.

And it can be really interesting to see. The other thing that's worth doing at times is a question-by-question analysis. So in our evaluation of the peer specialist training program, we were able to see question by question whether there was like some level of improvement, no improvement, or decreased knowledge even.

And for example people were able to get a question about recovery right maybe 60 percent of the time on the pre and 90 percent of the time on the post. So clearly the message is getting there.
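
The question-by-question analysis Jon describes can be as simple as comparing the fraction answering each item correctly before and after the training. The items and percentages below are invented to mirror his 60-to-90-percent recovery example:

```python
# Hypothetical fraction of trainees answering each knowledge item correctly
# on the pre-test and the post-test.
pre_correct  = {"recovery": 0.60, "peer_support": 0.55, "psych_rehab": 0.50}
post_correct = {"recovery": 0.90, "peer_support": 0.80, "psych_rehab": 0.52}

# Flag items where the training produced little gain (under 10 percentage
# points), so the trainers know which presentations to rework.
needs_work = [item for item in pre_correct
              if post_correct[item] - pre_correct[item] < 0.10]
print(needs_work)
```

With these made-up numbers the psychiatric-rehabilitation item is flagged, matching the kind of finding Jon says sent the trainers back to rework that presentation.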

And the other strength of this evaluation is that it was really compacted into a few days, so outside events were less likely to affect the outcome than if the tests were given a year later.

And we had a question in which things didn't change—people's knowledge of psychiatric rehabilitation. So the trainers knew they had to work on that presentation.

This was really useful. So briefly: get basic training—the more training the better—and I would consider web-based data collection if you're really working with an ongoing group, even with your own staff.

You know they're likely to cooperate. And that's my presentation and you have my contact information and I look forward to questions.

Holly Reynolds Lee: Great, well thank you so much Jon. This does conclude the presentation portion of the call. At this point we're going to open it up to questions from the participants.

As a reminder, press star 1 to enter into the queue. We can take the first question.

Coordinator: (Bud), your line is open.

(Bud): What do you mean my line is open. Oh, question?

Coordinator: Yes please.

(Bud): Oh I just wondered what we actually do with this. It seems like it might be expensive to run a survey.

Jon Delman: No, I don't think so if you do a pre, post, and you do the pre right before the event and the post right after.

Pat Corrigan: Yeah, somebody could take the Attribution Questionnaire and give it out to 30 people. You go into a room where you want a person you know to tell his story—30 people in the room—give out the measure before, and give out the measure after.

So you have the averages and can look on a bar graph for where there's a change. What do you do with it? I think you need to ask yourself, if you're doing a particular intervention and find no change, whether you need to significantly tweak that intervention to have an impact.

I think the other thing you need to look at is that there are nine different factors here, including dangerousness and blame and the like.

And if you're particularly interested in challenging the stigma related to dangerousness but find no difference on the Attribution Questionnaire, then again you need to tweak that intervention, that anti-stigma approach, to develop the desired effects.

(Bud): Okay thank you.

Coordinator: (Carrie), your line is open.

(Carrie): Hi, I have two questions. One is about the public service announcements, the "What a Difference a Friend Makes" campaign. Our community has tried to use the PSAs, and we have been told by our cable company that they expire on November 14.

And so I guess I was wondering if from Ad Council's point of view if that means that they're not going to be usable after that point or if they are going to extend the usability of those.

And the second question I have is more about evaluation. It seems like what you're talking about is evaluating individual events versus mass campaigns. And I was just wondering if I could get the speakers' comments on whether we should be looking at entire initiatives, say over a year, or at people's responses and perspectives after individual events of that initiative? Thanks.

Pat Corrigan: So maybe Holly should answer the first question about the PSA and then Jon and I could take a stab at the second question.

Holly Reynolds Lee: Sure, I can share that the PSAs are in the process of being renewed. If you want to contact the ADS Center directly we're happy to give you some more information as we get it and as it becomes available.

So that's the status for now and feel free to contact us again.

(Carrie): Okay great. Thanks.

Pat Corrigan: So the second question, if I heard it correctly, is: "Should evaluation be done globally, at a PSA level, or more narrowly and locally, in terms of targeting specific groups?"

(Carrie): Yes.

Pat Corrigan: I think the best stigma program is going to try to combine those: investing just in PSAs is going to miss targeted groups, and investing just in targeted groups can miss opportunities to reach the population.

Doing targeted groups from an evaluation standpoint is a lot easier, because all you need to do is collect data from the 30, 40, 50 people in the room in front of you.

Collecting PSA information, people who research this area tend to distinguish between penetration and impact.

Penetration might simply be how many people remember seeing the PSA; impact is, given that you've seen it, what kind of changes it has on your attitudes.

So looking at PSAs is a much broader sort of evaluation. Looking more locally and narrowly would be using something like the AQ or something else.

By the way one other point is I thought you were going to say, you know the problem with the Attribution Questionnaire is it's just attitudes. It's just looking at whether you believe people are more or less dangerous.

Now, the measure is sensitive to that, so it will answer that question, but what it lacks are behaviors. So you also need to be innovative—put your heads together on what kind of behavioral measure would be best.

Jon Delman: I concur with Pat. I guess in terms of a PSA it may be that you know you give a pre/post to a group of people who just look at it and maybe you can compare some PSAs.

That's really imperfect, but it may indicate that you have something to go with.

(Carrie): Okay, thank you.

Coordinator: (Maureen), your line is open. Please check your mute button. (Maureen)? Delphine, your line is open.

Delphine Brody: Yes, hello, Delphine Brody with the California Network of Mental Health Clients. I wanted to comment first: it would be excellent if, as Jon said, the presentation handouts were posted immediately after this call, so those of us who came in late because we didn't have the software can still reference things that Pat Corrigan said as well as things that Jonathan said.

So that would be extremely helpful—they're not currently available, at least online. In the past we've had those available to flip back through so we could ask better questions.

Another question I have—well, it seems as though the types of programs that would be most easily evaluated through community research on an affordable scale would be face-to-face events, such as speakers' bureaus and other contact-style things, or possibly education that's done by consumer groups in the community.

I'm wondering whether there's a way that consumer groups can also evaluate the effectiveness, well, here's just an idea that I wanted to run by you both.

We did this somewhat already in California a few years ago. We asked focus groups of consumers what they thought about the big national anti-stigma campaigns that were out in brochures and PSAs.

And they gave their opinions on them very sharply. Most were not in favor of the status quo messages. But I just wanted to run it by you—what you think about that method, and whether there's a particular way in which it could be done more effectively, given that it's quantitative feedback we'd be gathering.

I mean, it's a little hard to guess what responses people would have other than yes/no, which isn't necessarily that informative. But maybe that is the best way to go, given that it would provide quantitative data.

Anyway I'll leave that to you, Jonathan and Pat.

Jon Delman: This is Jon. I think that is an effective first step, doing a qualitative assessment, and maybe the next step is taking those results and trying to quantify them—turn them into a measure—if you're looking for quantitative information.

But this information is valuable in and of itself. The bottom line is that a good qualitative analysis describes exactly how you arrived at your themes and your results.

It also involves a level of triangulation, so to speak: you and another person—at least two or three people—should be analyzing it, then coming together and seeing if you're coming up with the same things.

So if you were to bring that information to a decisionmaker, or other person, or even a journal, I would first have you or a number of people separately evaluate the information from each focus group, and then ultimately the complete set, and then describe specifically the method you used to come to those results. And that's valid.

Delphine Brody: Thank you. That's pretty much how we did it.

Pat Corrigan: A couple other thoughts. The first is that actually having people in front of you is probably how much of stigma change has been represented—you have a classroom full of people and see if their attitudes change.

But another somewhat doable approach: we're working with a couple of States who now have online anti-stigma approaches. Usually they are tied in with mental health literacy; the National Mental Health Awareness Campaign, for example, has a website.

And you could partner with the State website to put information regarding stigma right there on the website. Even more, what you can do is collect data from websites—determine how many of the people going to the website are clicking through to different pages.

For example, after they read page 1, do they click through to learn more about depression, or do they click through to actually get an address for a person providing services?
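
The click-through idea Pat raises can be measured from an ordinary page-view log. Here is a minimal sketch with an invented log (the visitor IDs and page names are hypothetical), counting what share of visitors reached each follow-on page:

```python
from collections import Counter

# Hypothetical page-view log: (visitor_id, page) events from a State
# anti-stigma website.
log = [
    (1, "home"), (1, "depression"), (1, "find_services"),
    (2, "home"),
    (3, "home"), (3, "depression"),
    (4, "home"), (4, "find_services"),
]

visitors = len({vid for vid, _ in log})   # unique visitors to the site
views = Counter(page for _, page in log)  # views per page

# Share of visitors who clicked through to each follow-on page
# (in this toy log each visitor views a page at most once).
clickthrough = {page: views[page] / visitors
                for page in ("depression", "find_services")}
print(clickthrough)
```

In Pat's terms, total visitors is closer to penetration, while the click-through shares hint at impact—whether people went on to learn more or to seek services.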

And the second thing: what about judging public service announcements by getting consumers' impressions of them? I think it's useful data. You're in essence looking at satisfaction with the public service announcement; you have data that can report people's means and standard deviations on those kinds of variables.

As you admit, the harder the questions, the more time consuming the analysis becomes. So one of the hard questions about this is, say you find that consumers answering this measure had a so-so reaction to the public service announcement.

And the question is, compared to what? So one of the challenges would be: are you comparing this to what other consumers would say? Are you comparing this to what the people who developed these things would say?

The more elegant the questions the tougher the analysis becomes.

Delphine Brody: We actually asked people to give pro and con arguments for why they liked or disliked those ads, and so we got some interesting answers in a qualitative way.

Pat Corrigan: You know what would be interesting from that is to make a list and get some consensus on what benefits and costs come up from the consumers. I think that would be really useful to the people developing these things, because the consumers might say, you know, you're not talking about work at all.

Then that would suggest future iterations of this need to include some sort of issues related to work.

Delphine Brody: Definitely. Thank you.

Coordinator: (Samaya), your line is open.

(Samaya): Yes, can you hear me? I'm in Washington, DC, I'm a registered nurse and I would like to present training to both consumers and non-consumers in the medical field.

I'd like to know if there are PowerPoint training modalities for presentations on stigma available.

Pat Corrigan: Again I would look to Holly and the ADS Center and what you all have available in that regard.

(Samaya): There are PowerPoint training modalities?

Holly Reynolds Lee: This is Holly. Can everybody—can you hear me okay, Pat?

Pat Corrigan: Yep.

Holly Reynolds Lee: Okay, it seems like we're having a little bit of trouble with some of the participants asking questions. When you say PowerPoint trainings can you expand on that a little bit?

(Samaya): During the presentation it was discussed that there's a lot of information that's already out there. And it's, you know, not good to repeat what you can Google and see what other people have done.

I am interested in presenting a training to local consumers and non-consumers in the medical field, and was wondering if there were already, I guess, set PowerPoint training modalities already in place that one could obtain for the purpose of educating the public on stigma (unintelligible).

Holly Reynolds Lee: This is Holly from the ADS Center. I think that we have a lot of different materials available and I think we can help point you in some right directions that would give you information that would be easy to compile.

I don't know off the top of my head if there's a set presentation on this topic, but I know that we have a lot of information and can certainly connect you to it to make it a little easier to pull all that together.

(Samaya): Okay, thank you.

Holly Reynolds Lee: Yeah, you're welcome. We look forward to hearing from you.

Coordinator: (Carcita), your line is open.

(Carcita): Hi, this is (Carcita), I work with the Association of Oregon Mental Health Programs and I enjoyed the presentations by both of you very much, very informative.

I am working on efforts to do a statewide project, several actually, with consumers in the State of Oregon. And one thing that's a big issue here is the certified peer specialists and training.

And that's being developed by several consumer-run organizations and other people. So I'd be very interested in seeing what Jon was talking about as far as the evaluation he did and also that training, if that's possible to get.

Also, because of the project that I'm working on, and I know it's huge, I'm wondering what kind of information I can get to do an evaluation of it. What I'm doing is utilizing the organization I work with. We work with all 36 county mental health directors, which is a good outlet for reaching people.

Our organization has unlimited teleconferencing capabilities, so that's the avenue I'm going to use to pull people together in areas. So I'm just a little nervous about how I'm going to evaluate this, or how to proceed to see what is of value and helpful, which is my goal of course.

Jon Delman: This is Jon. I'm happy to share with you or anybody the instruments that we developed to assess knowledge, and even the reports.

What's happening now in the State is we're no longer involved in the evaluation, but one of the universities has sort of picked it up and is doing a very comprehensive evaluation, and I can put you in touch with the person who's responsible for that.

But that's an example in a sense of a community organization developing a relationship with a university. And I can put you in touch with that person.

(Carcita): Okay, so I don't have my material in front of me, I had problems with my Internet as well so ... how do I contact you, is that in the presentation PowerPoint or information? How do I—

Jon Delman: Slide 67.

(Carcita): Huh?

Jon Delman: Slide 67.

(Carcita): Slide 67. What does that mean?

Jon Delman: (That) slide.

Holly Reynolds Lee: I'm sorry, that means slide 67; we are in the process of uploading the presentation files to the website. They should be there hopefully by the end of the day and if not certainly tomorrow.

If you aren't able to get it feel free to call the ADS Center; we can provide you with that information over the phone or by email.

(Carcita): Oh, okay.

Pat Corrigan: You know Jon did a wonderful job talking about—

Jon Delman: Thank you.

Pat Corrigan: What are some of the ideas of how we get academics to partner in this, and I think that would be one of the questions for Oregon.

One way is that there are people like us who provide relatively limited feedback as part of our mandate, kind of limited information like you see here.

The question though is how to get more academics, how to get somebody in Oregon to participate with you. One strategy: generally, people who are relatively newer in their career are more interested in partnering with people who provide opportunities to get data and have some impact on the field.

Another option is to contact students. I have a student for example here in Chicago who is doing a project with a local consumer group looking at the effects of their anti-stigma program on high school kids.

We're randomizing to two different groups. Consumer—or students doing master's theses and doctoral dissertations, if you can get them to participate, are really wonderful resources.

If for example you got a student of mine, you get not only them doing their dissertation but you get me making sure the dissertation is correct, or similarly elsewhere in the country.

Also to put this in perspective, I have about one doctoral student a year working in this area so it wouldn't be a lot of people.

Thirdly, there's whether or not people should think about adding a line for evaluation when they're seeking support from, for example, the States. We're doing a project in the State of California where they provided some resources to the program we're looking at to collect evaluation data.

And for that matter I think we ought to look at some sort of technical support program. I don't know if this would be part of the ADS Center or elsewhere, but, that would provide support to you, information to you, as you put together an evaluation program.

(Carcita): Oh, thank you. I am a consumer by the way.

Pat Corrigan: Bravo.

(Carcita): Yes, I am not ashamed.

Pat Corrigan: Actually I am too.

(Carcita): Oh amen.

Pat Corrigan: Yep.

(Carcita): Okay, thanks for your assistance, appreciate it.

Coordinator: Beth, your line is open.

Beth Bowsky: Hi, this is Beth Bowsky, I'm with NAMI, and I wanted to ask you, the Attribution Questionnaire, has it been validated and has it been looked at in terms of cultural biases?

Pat Corrigan: Has it been evaluated and has it been looked at in terms of sexual biases?

Beth Bowsky: Cultural.

Pat Corrigan: Cultural biases. It has been evaluated; I believe that this slide show has some references that you can click through. Again, you can email me and we could send you a list.

The Attribution Questionnaire has been evaluated on several different groups. Culturally, the problem with much of the research on mental illness stigma is it's done on middle-class European Americans.

So this has not been written specifically for such important groups as African Americans or Latinos or Asian Americans. I can tell you we did a qualitative study with African Americans on the west side of Chicago on their views of the stigma of mental illness, and came up with some noted differences.

One of the big ones that people in our survey said is that they thought seeking out services for mental health problems was letting their church down and that they should turn to spirituality and their minister as one way of dealing with these kind of issues.

Now that's a qualitative study with a handful of people, but it does say to me that ethnic issues are important, and I do not know at this point any kind of evaluation instruments that specifically took on that issue.

Beth Bowsky: Thank you.

Coordinator: Robert, your line is open.

(Robert Davis): Thank you. This is (Bob Davis) with the Eastern Regional Mental Health Board in Norwich, Connecticut, and I'm concerned about normative bias. My—one of my principal audiences is city councils and zoning boards.

And they know what the right answer is supposed to be when we talk about stigma and discrimination. They know that discrimination is against the law. So they disguise their opposition in terms of technicalities of zoning regulations and things of that sort.

We actually had one Federal court case that exposed that and that was—and they remember that. But aside from that, do either of you have any advice on how to deal with that kind of problem?

Pat Corrigan: Well, one thing you said that's really a concern about these measures is the idea of social desirability: if you present a mental health stigma questionnaire to me twice, I'm going to figure out I'd better look better on it.

(Robert Davis): Exactly.

Pat Corrigan: That said, the Attribution Questionnaire is sensitive to that. If you want, we can swap an email about how that works, but in essence it's an issue of statistical interaction.

So, it can show differences in effects on top of the social desirability issue. But the other issue, you really mentioned that's important is again we're only measuring attitudes and what we're more interested in is what these councils, members of the council or legislators, are doing.

We did one study, in Psychiatric Services, that looked at—we actually searched Nexis and Lexis for every law produced by States in a 10-month period. I think we did this a couple years ago.

We found some interesting trends, which States are promoting positive legislation for people with mental illness versus less so, and we did the exact same study for newspapers and got some trends on which places around the country are more positive in representing mental illness.

So it's a recurring theme: the harder the question, the more difficult the analysis becomes and the more effort it requires.

(Robert Davis): Well thank you. I will write to you, for a number of reasons. Thank you.

Pat Corrigan: Mm-hmm.

Holly Reynolds Lee: Thank you. And I think we have time for just one more question before the end of the call.

Coordinator: (Karl), your line is open.

Man: He'll be right back hopefully.

Man: Hello?

Coordinator: Yes, please go ahead with your question.

Woman: Hello?

Man: Hello?

Coordinator: (Karl), your line is open.

(Gene Torres): I have a question. This is (Gene Torres), for (Karl), at the mental health clinic at the Menlo Park Veterans Administration in California. And I was wondering about the professional diagnosis of stigma.

What type of person—these people, do they need a medical diagnosis for stigma, or do they have to have a personal commonness with a brochure or something that came out in a flyer or newspaper or anything that says they have stigma?

Would it be their own personal opinion?

Pat Corrigan: We're looking mostly at people's personal opinions and how they buy into prejudice against people with mental illness.

(Gene Torres): Okay thank you. That's my question; thank you.

Holly Reynolds Lee: Well thank you everyone, to our presenters as well as to the people who asked questions and participated on the call. This concludes our training today. If you missed all or part of the call please access our website, promoteacceptance.samhsa.gov, for playback instructions and downloadable materials.3

Thank you again for your interest and your participation. Again, I encourage you to participate in the anonymous survey when you receive it. Thank you and goodbye.

Man: Hello?

Holly Reynolds Lee: Hello? Jon, Pat, you guys still there?

Coordinator: I'm going to go ahead and place your lines into a post-conference, one moment.

Holly Reynolds Lee: Thank you.

1 Parentheses in this transcript indicate places where what the speaker said is unclear in the recording of the teleconference (or, when names of people asking questions are in parentheses, when the spelling of the person's name is not known).

2 Since this teleconference, the Chicago Consortium for Stigma Research has evolved into the National Consortium on Stigma and Empowerment (NCSE). The NCSE's website is at http://www.stigmaandempowerment.org.

3 You no longer need playback instructions, as audio is now available at the ADS Center website.