A common fallacy is to assume that if your business or organisation is brilliant at what it does, and it’s something which creates social impact, your ideal customers will come knocking on your door.

Responsible finance providers (community development finance institutions) which fall into this trap aren’t alone.

Plenty of tech startups, app developers, restaurants, clothing businesses, many social enterprises and other organisations do too. Ultimately some fail simply because they don’t bother to proactively market themselves.

And responsible finance providers can easily be outspent by unethical high interest lenders able to put enormous resources into communicating persuasive messages which make it appear they have their customers’ best interests at heart.

Consumers are besieged by more and more marketing messages. Responsible finance providers need to tell people they exist, and ensure the communication they do is effective and engaging.

Whilst your best source of customers as a responsible finance provider might well be through word of mouth recommendations from existing or former customers, you need a process to facilitate these referrals.

You also need to attract potential new customers – those individuals, social enterprises and businesses which cannot access mainstream bank finance yet may be an ideal fit against your lending criteria – and if they don’t find you, perhaps they’re at risk of exploitation by unscrupulous lenders. And you need to invest effort in building this pipeline, rather than assuming that people will simply discover you.

One effective part of a marketing strategy can be to encourage online reviews. These encourage trust and word of mouth referrals. Another is to encourage feedback from people through customer surveys, since potential clients then get to see that you lend to and support “people like them.”

Many RF providers also use the results of their customer and stakeholder surveys within their Impact Reports (and perhaps to illustrate their performance against the UN Sustainable Development Goals).

Survey results (and research findings) can also be used to get your organisation into the press. But a meaningful survey isn’t only written to generate positive outcomes.

So how do you design an effective and meaningful customer survey?

First of all, what is “meaningful”? When you’re planning a survey there needs to be a point in mind.

Do you care about customers’ feedback, will you learn from it, and are you prepared to implement positive changes in response to it? I hope so.

Do you want feedback from stakeholders to facilitate partnership development? Do you know which audience you are asking feedback from, and why you are asking them questions?

Figure out the purpose of your survey first – and don’t try to combine research into two or more different customer segments into one survey.

Some principles of effective survey design:

Is each question necessary, and will the people you are asking be able to answer it in a meaningful way? The fewer questions you ask in your survey, the better the participation rate – you want to make it as easy as possible to take the survey. Get a lot of possible questions down on paper, then red-pen them.

That said, it can be good to have a triangulation question. For example, you might have a question about one topic with a yes/no answer and another about the same issue with a mark out of 10 answer to help you to understand participants’ priorities.

Avoid ambiguity and vagueness. Be concrete with the words you use. What do you mean by “excellent” or, for instance, by “sometimes,” “often” or “frequently” as answer choices? Here’s an example. Let’s say you wanted to ask customers, “How often were you overdrawn in the last year?” and your potential answers were:

Never | Rarely | Often | Frequently.

These (real examples) are too vague. Much better (more specific and unambiguous) options would be:

Never | Less than 6 times | Once per month | More than once per month | I’m always overdrawn.

Ideally give each question a small number of choices.

Don’t ask two questions in one. For example, don’t ask people to rate the “quality and speed” of your response to their enquiry.

Another frequent issue in survey design – sometimes deliberate, if someone is trying to bias the results (I’m not recommending this!), and sometimes accidental, if you’ve not been to survey school – is an unequal number of positive and negative response options. For example: the question “How satisfied were you?” with the following answers:

Completely satisfied | Mostly satisfied | Somewhat satisfied | Neither satisfied nor dissatisfied | Dissatisfied.

This has three positive choices, one neutral and only one negative choice (and the language is vague).

Better, if you want a meaningful response, would be to ask “How satisfied or dissatisfied were you?” (this question itself is less biased, as it’s not leading to an assumption of satisfaction) and to give these possible answer choices, in which the positive and negative choices are equally balanced and are mirror images of each other:

Completely satisfied | Somewhat satisfied | Neither satisfied nor dissatisfied | Somewhat dissatisfied | Completely dissatisfied.

(The word “somewhat” in this case is still a bit vague, but is used in both the positive and negative choices).

It’s also good practice where possible to distinguish “undecided” people from “neutral” people – which we did not do in the last example.

Here’s one way of doing this: ask a question such as “Do you agree or disagree with this statement?” (then give a statement about your organisation), with these choices:

Completely agree | Agree | Neither agree nor disagree | Disagree | Completely disagree | No opinion.

This question format can work well in a grid with multiple statements; if you are running the survey using a (paid-for) tool such as SurveyMonkey this is easy to do.

Some schools of thought say you should force a choice, because in some surveys respondents gravitate to the neutral option. When I have removed the neutral option from surveys I have conducted, I have kept in a “not relevant” option.

It’s helpful to have a few questions in which people make these simple choices before you ask them anything open ended in which they have to write up a response. It is good to ask “why” your customers value you, or what people value most – and give an open ended option – but warm respondents up first.

It’s also good practice to do a survey which is replicable. In other words, let’s assume you will survey your customers this year, next year, and so on, and want to make comparisons to see whether levels of satisfaction with different aspects of your service have changed. In that case you will need to ask the same questions in the same way next year as this year.

Using the Net Promoter Score (NPS, a trademarked metric) can be helpful for making year-on-year comparisons between survey responses – a great way to demonstrate changes. It is calculated from responses to one simple question: “How likely is it, out of 10, that you would recommend [your organisation] to a friend or colleague?” Next to zero, there’s usually the answer qualifier “I would definitely not recommend” and next to 10, “I would definitely recommend.”

I bring this (Net Promoter) question in about three quarters of the way through a survey, and tend to ask an open ended question afterwards about why respondents have given their score. Although NPS has its detractors, particularly around its labelling of people who score you as 7 or 8 as “passives,” I think that in context, and in conjunction with other questions in your survey, it makes for a useful and valuable benchmark for meaningful year-on-year comparisons.
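To make the arithmetic behind the metric concrete, here is a minimal sketch of the standard NPS calculation. The function name is hypothetical; the classification thresholds (9–10 promoters, 7–8 passives, 0–6 detractors) are the conventional ones referred to above.

```python
def net_promoter_score(scores):
    """Standard NPS: percentage of promoters (scores of 9-10)
    minus percentage of detractors (scores of 0-6), rounded.
    Scores of 7-8 ("passives") only affect the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# For example, 3 promoters, 2 passives and 2 detractors out of 7
# responses gives round(100 * (3 - 2) / 7) = 14.
```

The result ranges from -100 (everyone a detractor) to +100 (everyone a promoter), which is what makes it a compact year-on-year benchmark.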

Test your survey: build it in a tool that has analytical and design capabilities (I use a paid-for professional SurveyMonkey account to run surveys with the organisations I work with), and before you launch it, get your staff and some stakeholders to preview and test it too.

Participation

Some basics on survey design there, but how do you get people to participate? Well don’t be shy of asking them, ideally several times if you are emailing them. I’ve had brilliant results with emailing people four times: once on the day the survey opens, again when you are halfway between opening and closing dates, then the day before the survey closes and finally at noon on the day the survey closes.
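The four-email cadence above is easy to plan mechanically. Here is a minimal sketch, assuming morning sends for the first three emails; the function name and the 9am send time are my own illustrative choices, not part of the original advice.

```python
from datetime import date, datetime, time, timedelta

def reminder_schedule(open_date: date, close_date: date):
    """Hypothetical helper: the four-email cadence described above -
    launch day, the halfway point, the day before close,
    and noon on the closing day itself."""
    halfway = open_date + (close_date - open_date) / 2
    return [
        datetime.combine(open_date, time(9, 0)),                       # survey opens
        datetime.combine(halfway, time(9, 0)),                         # halfway reminder
        datetime.combine(close_date - timedelta(days=1), time(9, 0)),  # day before close
        datetime.combine(close_date, time(12, 0)),                     # noon on closing day
    ]
```

Recording the schedule this way also helps with the replication point made below: the same cadence can be regenerated exactly for next year’s survey dates.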

Do of course only contact people in accordance with your data use policies and the permissions they have given you. In the first email, tell them that you will be sending three follow-up emails to encourage participation in the survey, and that they can opt out of these reminder emails (only) if they want to – this will stop people from unsubscribing from all your emails because they are annoyed about being chased to participate in your survey.

And depending on the email provider you use to contact your customers, you may be able to apply a tag or rule to people who have participated so they aren’t chased up after they have taken part.

Make a record of the pattern you have used in terms of emails to your customers asking them to take part, because it will be important to replicate it in future years if you want to be able to make meaningful year-on-year comparisons between results.

When I’ve been involved in impact snapshot or impact reporting research I have found that a small incentive to participate can be highly effective. I’ve also found that a relatively short deadline to participate (say a two week window) has generated a better response rate.

What do you do next?

Do thank people for taking part! Communicate that you are taking comments and feedback seriously, and share anything you will change or improve as a result.

Engage and involve your customers and stakeholders.

If your survey has quantified your impact and changes attributable to your organisation, include the results in your impact report, perhaps publish them online, make a video about them, and consider telling the media about your results (news, after all, is change which is relevant to a journalist’s audience).

Online reviews

Quite a few responsible finance providers receive (generally excellent) comments through Feefo. Feefo is a closed platform: its business users invite their own customers to participate, so people who are not your customers can’t leave reviews. This is an important consideration for the responsible finance providers I have spoken with which use it.

Some do use Trustpilot, one of the biggest online review platforms. It’s free to use up to a point, but much of its functionality comes through paid-for options which can be expensive (for example, embedding some of your reviews into your website). Another big concern several responsible finance providers have mentioned about Trustpilot is that people who are not your customers can leave reviews (and there have been reported problems with fake reviews in industries connected to financial services).
