Project VASE
“Victims As Social Evaluators”
VASE is short for “Victims As Social Evaluators”. VASE studies are conducted in unstable and transitional societies, with HRV*/Disaster/Conflict multi-victims trained as social evaluators.
Local VASE capacity is developed through extensive training workshops among selected multi-victims, using a structured questionnaire designed around SEED (Socio-Economic-Environmental-Democracy) issues.
The research study is designed in consultation with the multi-victims and the various stakeholders, through multiple rounds of qualitative interviews, focus groups and pilot surveys.
The key points of VASE projects are:
1. Training and generating local capacity for research surveys
2. Putting victims themselves as social evaluators
3. Understanding the problems through the eyes of those who are at the receiving end along with the stakeholders
4. Understanding the probable solutions from the suffering population
5. Making community communication a two-way process
Rationale of the project
Historically, there is no linear succession in transitional and unstable societies between the activities of relief, rehabilitation and development, which are most often carried out at the same time by different agencies and different personnel, following different procedures.
However, there is an agreed understanding of the linkage between the three: “better ‘development’ can reduce the need for emergency relief; better ‘relief’ can contribute to development; and better ‘rehabilitation’ can ease the transition between the two”.
In the aid policy debate, as described by Margie Buchanan-Smith, two main reasons are given for the importance of linkages. The first concerns management, namely the developmental quality and sustainability of aid: how can one safeguard the assets generated in a humanitarian assistance programme that often acts as a substitute for public services? More recently, a claim has also been made about the positive influence of linkages on security and human rights issues. The aim is increasingly to reduce vulnerability and disaster risk, avoid increases in poverty, and prevent, or at least minimise, long-term socio-political marginalisation and discrimination.
It is generally agreed that by creating linkages between immediate humanitarian assistance and longer-term rehabilitation and development, not only lives but also livelihoods will be saved. The VASE evaluation highlights both the beneficial and the negative influences which are interwoven in the fabric of humanitarian crisis response, and their links with longer-term structural efforts in the fields of governance, development aid, corporate investment, diplomacy and even security.
In a normal working environment, we know from past surveys the expected response rates of a stable society. Hence, in normal circumstances, agencies engage researchers on a per-completed-questionnaire basis. We also know from practice how many interviews one researcher can complete during a day’s fieldwork.
BUT the same approach is practically unfeasible in an unstable, conflict-ridden or post-disaster scenario. Here, all the previous equations break down. There is no longer any established trend in population data, and researchers must travel under great difficulty to reach the desired destination. This significantly reduces their effective working hours in the field.
The problems do not stop there. Even if the researchers manage to reach the destination well within time, the response rate takes a nosedive, because many of the people randomly selected from the population list are dead or have relocated. Even in the replacement sample, respondents are so shaken by the overwhelming situation that they often simply refuse to talk.
Our experience of polling in unstable and conflict areas across the globe has taught us that it takes a great deal of persuasion, time and energy to help a respondent overcome the ‘fear factor’ or ‘stress factor’ and actually ‘talk’. This further reduces the quality of interviews. This is where VASE positions itself as the solution.
Framework of the field team
The selected VASE team leaders are first given extensive training by our experienced fieldwork experts in a unique “train the trainer” programme. A minimum of one team leader is appointed for every team of five VASE researchers. A typical field deployment thus has about 50 researchers and 10 team leaders, overseen by two central observers.
Developing the local capacity
As mentioned above, the survey fieldwork is done by local VASE researchers selected and trained by Team CVoter professionals. We build local capacity in SEED by identifying suitable multi-victims during the qualitative research process. These researchers are trained in an extensive research survey workshop spanning over a week and receive experience certificates after successful completion of the assignment. In previous VASE assignments, many of them have subsequently been absorbed into the local economy as various social agents.
Absence of regular surveys
It is important to note here that the entire process of training workshops, which consumes considerable time, energy and resources, could be treated as a stand-alone assignment for developing local capacity. At the same time, it is ironic that in these transitional societies, due to the absence of longitudinal surveys with timeline analysis, this local capacity generally cannot be taken into service again, in effect letting the capacity go down the drain.
Roster of trained local capacity
Generally speaking, most of the previously trained manpower moves on with time and, after our assignment, joins various occupations. So if the gap between two studies is substantial, the entire exercise has to be done afresh. However, it is proposed that a roster of such trained personnel be maintained in order to make them available to other evaluation teams in future, so that conducting quantitative surveys becomes easier for the stakeholders. It is also useful for the local capacity, as they would be regularly engaged and rewarded for a skill they have learnt.
Questionnaire Development
We periodically revisit the questionnaire developed and used across all SEED projects. Many of the elements are repeated in order to track change over time as well as across various socio-cultural contexts. New questions, according to the scope of study set out in the ToR, are added to the existing questionnaire. The questionnaire is fine-tuned following the observations of the expert team that visits the transitional areas for the qualitative study. As a result, the questionnaire typically contains about 50% common ground covering quality-of-life aspects and 50% localized issues and aspects.
Translation policy
The English questionnaire is translated into the local languages. As per the WAPOR/ESOMAR and ISSC quality norms, the questionnaires are back-translated into English by a different set of translators to ensure the correctness of content and context.
Sample frame
A detailed and fresh location list of the affected population is obtained in the selected district, and micro-locations are randomly selected from that list based on a sampling grid. A sample is allocated to each micro-location, and the respondent households are selected randomly from the HRV/Disaster/Conflict-affected population.
Sample selection
After the selection of the household, a routine Kish grid or the last-birthday method is applied to select the respondent. Both methods involve listing the living and present members of the household.
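For illustration, a minimal sketch of the last-birthday rule is given below; the field names, the 18-year adult cut-off and the day-counting shortcut are assumptions made for this example, not prescriptions from the VASE manual. A Kish grid would instead pick the respondent from a pre-printed selection table keyed to the number of listed household members.

    from datetime import date

    def select_by_last_birthday(household_members, today=None):
        """Pick the listed adult whose birthday passed most recently.

        household_members: list of dicts with 'name', 'age', 'birth_month'
        and 'birth_day' keys (hypothetical layout for this sketch).
        """
        today = today or date.today()
        adults = [m for m in household_members if m["age"] >= 18]
        if not adults:
            return None

        def days_since_birthday(member):
            # Approximate days elapsed since the most recent birthday,
            # wrapped around the year boundary so the ordering stays cyclic.
            elapsed = (today.month - member["birth_month"]) * 31 \
                      + (today.day - member["birth_day"])
            return elapsed % 372

        return min(adults, key=days_since_birthday)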
Socio-Economic classification
In households where the main wage earner lost his or her life in the HRV/Disaster/Conflict, the previous wage earner’s occupation is recorded along with the present wage earner’s occupation.
Response Rate
The expected response rate differs across societies due to socio-economic as well as geographical differences in the local working environment. Based on our previous experience, we expect a considerably higher response rate in VASE studies than in studies using “normal” researchers (around 25%). In all VASE studies so far, we have achieved a minimum 80% response rate, which can be considered well above average by international standards for a “normal” population surveyed by “normal” research teams.
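To put these rates in perspective with purely hypothetical figures, not drawn from any particular VASE assignment: a target of 1,000 completed interviews requires roughly 1,000 / 0.80 = 1,250 contacts at an 80% response rate, but roughly 1,000 / 0.25 = 4,000 contacts at a 25% rate.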
Sample replacement
As in normal research, most of the questions apply to the status of the household rather than to the individual, so a replacement option is given within the same household, provided the replacement respondent is not a minor (unless otherwise specified in the context of the research).
Quality criteria
In order to maintain the quality of the fieldwork and ensure maximum dispersion of the sample within the selected micro-location, the VASE enumerators are allowed to conduct a maximum of 10 interviews per day. Within a given macro-location, the team leaders are advised to back-check 10% of the respondents from the completed list.
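A minimal sketch of how these two checks could be applied to a completed-interview list follows; the record layout and the fixed random seed are assumptions made for the example, not part of the VASE protocol.

    import math
    import random
    from collections import Counter

    MAX_INTERVIEWS_PER_DAY = 10   # per-enumerator daily cap
    BACK_CHECK_SHARE = 0.10       # share of completed interviews re-verified

    def daily_cap_violations(completed):
        """completed: list of dicts with 'enumerator_id' and 'date' keys (assumed layout)."""
        per_day = Counter((c["enumerator_id"], c["date"]) for c in completed)
        return {pair: n for pair, n in per_day.items() if n > MAX_INTERVIEWS_PER_DAY}

    def back_check_sample(completed, seed=0):
        """Randomly draw about 10% of completed interviews for the team leader to re-contact."""
        rng = random.Random(seed)
        return rng.sample(completed, math.ceil(len(completed) * BACK_CHECK_SHARE))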
Electronic data processing
The initial EDP is done in the assignment areas by VASE researchers. Once the fieldwork is over, they are given another round of training in computer data entry. The final EDP is done with a 20% back-check of the data entry. The final data screening and analysis is done using SPSS at the CVoter HQ in New Delhi.
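The 20% back-check can be thought of as a second, independent entry pass over a random subset of records, followed by a field-by-field comparison; the sketch below assumes a simple record layout for illustration only.

    def entry_mismatches(first_pass, second_pass):
        """Compare two independent data-entry passes field by field.

        first_pass / second_pass: dicts mapping record_id -> {field: value}
        (an assumed layout; second_pass covers only the ~20% verification subset).
        Returns (record_id, field, first_value, second_value) tuples for every discrepancy.
        """
        issues = []
        for record_id, second in second_pass.items():
            first = first_pass.get(record_id, {})
            for field, value in second.items():
                if first.get(field) != value:
                    issues.append((record_id, field, first.get(field), value))
        return issues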
Code of conduct
As required by the Code of Standards of the Council of WAPOR, we maintain the anonymity of our respondents. No information is released that in any way will reveal the identity of a respondent. Our authorization is required for any publication of the research findings or their implications.
Recent VASE assignments
Case Study: Indonesia
VASE researchers: 42 ex-GAM Female militia fighters
Research: Acehnese peace process and its impact on relief operations
Case Study: Maldives
VASE researchers: 20 Male and 20 Female Multi-victims
Research: Democratization and its impact on tsunami relief operations
Case Study: Kashmir
VASE researchers: 62 Male and 78 Female conflict-victims
Research: Perceptions of peace-building across Kashmir
Case Study: Congo
VASE researchers: 23 Male and 35 Female conflict-victims
Research: Perceptions on the democratization process
Case Study: Bihar (India)
VASE researchers: 22 Female multi-victims in red-light areas
Research: Socio-economic indicators and quality of life of sex-workers
Case Study: Palestine
VASE researchers: 25 Male and 25 Female conflict-victims
Research: Perceptions on the peace process and democratization
Case Study: Gujarat (India)
VASE researchers: 20 Male and 20 Female riot-victims
Research: Trust in public institutions after communal riots
Case Study: Rural India
VASE researchers: Over 100 Male/Female drought-victims
Research: Farmer suicides and issues of agricultural loans
The VASE surveys are a bit expensive…
Why we calculate the costs the way we calculate the costs…
While going through the figures in order to understand the costs of quantitative research, one really important issue for all of us to understand is the costing of the non-response or incomplete sample. There is one HUGE difference between our per-sample costing and the way other international professional agencies calculate the same. It is standard practice in the industry to include the non-response in the sample size charged to the client. Our practice, on the other hand, is to charge ONLY for COMPLETED interviews.
In other words, we do not charge for non-response or incomplete interviews. So if we revisit the Kashmir, LRRD2 or Haiti costing, one can see that we have actually planned to contact a much bigger sample than what we are charging for.
For example, in Haiti we planned to contact 3,000 respondents at a modest expected response rate of 50%, making a target sample size of 1,500 COMPLETE interviews. So, looking at the gross sample size, one will realize that we are charging for only half the samples we are actually attempting. If we are lucky, the response rate might go up and we might end up with 2,000 completed interviews, but even then the stakeholder will be charged only for 1,500 complete interviews.
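This billing logic can be summarised in a few lines; only the 3,000-contact, 50% and 1,500-interview figures come from the Haiti example above, while the per-interview rate and the function itself are purely illustrative.

    def billing_summary(planned_contacts, expected_response_rate, completed, rate_per_interview):
        """Bill the client only for completed interviews, capped at the agreed target;
        non-response and surplus interviews are never charged."""
        target_completed = int(planned_contacts * expected_response_rate)
        billed = min(completed, target_completed)
        return {
            "planned_contacts": planned_contacts,
            "target_completed": target_completed,
            "actually_completed": completed,
            "billed_interviews": billed,
            "invoice": billed * rate_per_interview,
        }

    # Haiti example: 3,000 planned contacts at 50% expected response gives a target of
    # 1,500 completed interviews; even if 2,000 are completed, only 1,500 are billed.
    print(billing_summary(3000, 0.50, 2000, rate_per_interview=10))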
Why? The stakeholder should ask that… and we can explain.
A generalized rule of thumb we have developed from our past experience is that if a researcher can interview 10 respondents in a single day in any NORMAL society, then the same researcher, with the same questionnaire, in any UNSTABLE society, should count themselves lucky to interview even 5 respondents.
This is precisely why, in any NORMAL society, our cost structure may pay the researchers on the ground per completed interview, BUT in a TRANSITIONAL society we make sure to pay the researchers on a daily-fee basis. This is simply because in these circumstances they can only try their best; the number of completed interviews is completely out of their control.
Now, consider that these researchers, while working in an unstable environment, are going the extra mile, making that extra effort, and taking risks to their security that any researcher working in a normal society would never face even in their wildest nightmares. Given that, paying them on a per-completed-interview basis would be a gross injustice, simply because in such adverse conditions the number of completed interviews is not in their control. This is why we pay them in full, appreciating their efforts, but charge the clients on a per-sample basis: we take the minimum required sample as the target sample size, and work on the gross sample size.
By doing this, we certainly take a risk. If we MEET the target, the researchers get their full money and we post some profit. If we MISS the target, the researchers still get their full money but we bear the loss, as the client cuts our payment. And if we are lucky enough to go beyond the target of completed interviews, the researchers get their full money and the client gets much more without spending a single penny extra; in that case our margin comes down, because our data-processing expenditure increases due to the surplus samples.
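The same three scenarios can be made concrete with a small sketch; every figure below (researcher-days, daily fee, per-interview rate, processing cost) is hypothetical, and only the meet/miss/exceed pattern reflects the description above.

    def agency_margin(researcher_days, daily_fee, completed, target_completed,
                      rate_per_interview, processing_cost_per_interview):
        """Researchers are paid a fixed daily fee regardless of yield; the client pays
        only for completed interviews up to the target; the agency absorbs the rest."""
        field_cost = researcher_days * daily_fee
        revenue = min(completed, target_completed) * rate_per_interview
        processing_cost = completed * processing_cost_per_interview
        return revenue - field_cost - processing_cost

    # Meeting, missing and exceeding a target of 1,500 completed interviews:
    for done in (1500, 1000, 2000):
        print(done, agency_margin(researcher_days=600, daily_fee=20, completed=done,
                                  target_completed=1500, rate_per_interview=10,
                                  processing_cost_per_interview=1))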
So the moral of the story is that while working on a survey in a risky, unstable environment, the rule of thumb is that the client is never at risk. That leaves either the researchers or the survey agency at risk.
We find it ethically and morally wrong to put the local researchers at risk. That ultimately leaves only us at risk. We know this does not make business sense, but we are convinced of our policy of putting the field researchers first.
We certainly want to post a profit like any other survey agency, but not at the cost of our field workers. We took this policy decision before getting into conflict and disaster assignments, fully aware of the risks, and we are genuinely happy and satisfied that we did it that way.
Can we treat this as CSR in our business itself??