Research methods: Statistical Analysis
Our specialist in-house research team conducts advanced statistical analysis, whether that is clustering your audience into distinct segments, uncovering the key drivers of product recommendation, brand perception or purchasing habits, or simply exploring brand reach.
We are experienced in analysing data beyond standard data tables, using advanced statistical techniques and machine learning methodologies to unearth hidden insight to help drive key business decisions.
Conjoint analysis: A brief guide
Do you ever wonder how your product fares against market competitors? Or imagine how a small tweak to current product features could affect customer preference?
If so, conjoint analysis can answer them. This powerful research technique establishes the importance of product features, such as price or brand, to consumer preference, answering questions such as: how do changes to product features affect market share, and how do these features combine to shape consumer decisions?
What is conjoint? An example-based approach:
Conjoint is a research methodology that assesses product preference and helps to determine which combination of product features people find most attractive.
The basics of this technique can be understood by looking at how consumers shop for a product; let’s take a desktop computer as an example. This product has several attributes that we consider when making a purchase, such as brand, price, RAM and screen size, and each attribute can have different levels: for example, a computer could have 8GB, 12GB or 16GB of RAM.
Conjoint analysis aims to understand the effect of these levels on product preference. It does this by calculating a preference score for each level (called a part-worth utility); these scores can then be combined to produce overall product preference scores. As a result, researchers and marketers can create “what if” scenarios (using our in-house simulator tool) in which different products are trialled and tested against one another. This allows existing products to be traded off against market competitors, showing how new products will fare in the marketplace and how modifying existing products can gain a competitive advantage in terms of market share (share of preference).
Example of simulator tool:
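For illustration, the core simulator logic can be sketched in a few lines of Python. The part-worth utilities, product specifications and the logit share-of-preference rule below are all hypothetical assumptions chosen for the example, not our production tool or real study results:

```python
import math

# Hypothetical zero-centred part-worth utilities for a desktop computer.
# All figures are invented for demonstration purposes.
utilities = {
    "Price": {"£499": 0.8, "£699": 0.1, "£899": -0.9},
    "RAM":   {"8GB": -0.4, "12GB": 0.0, "16GB": 0.4},
    "Brand": {"Brand A": 0.3, "Brand B": -0.3},
}

def product_utility(spec):
    """Total utility of a product = sum of the part-worths of its levels."""
    return sum(utilities[attr][level] for attr, level in spec.items())

def share_of_preference(products):
    """Logit rule: each product's share is proportional to exp(total utility)."""
    expu = {name: math.exp(product_utility(spec)) for name, spec in products.items()}
    total = sum(expu.values())
    return {name: 100 * e / total for name, e in expu.items()}

# A "what if" scenario: trade off our product against a competitor.
scenario = {
    "Ours":  {"Price": "£699", "RAM": "16GB", "Brand": "Brand A"},
    "Rival": {"Price": "£499", "RAM": "8GB",  "Brand": "Brand B"},
}
shares = share_of_preference(scenario)
```

Swapping a level in `scenario` (for example, dropping our price to £499) and recomputing shows how share of preference shifts, which is exactly the kind of trade-off a simulator tool lets you explore interactively.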
Other key outputs include attribute importance scores and part-worth utility scores. The attribute importance scores show the relative importance of the attributes: each attribute is assigned a value between 0 and 100, and the values collectively sum to 100. If price has an importance score of 40 and RAM has an importance score of 20, we can infer that price is twice as important as RAM in the decision-making process.
Example of attribute importance scores:
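As an illustration of how importance scores can be derived, one common approach takes each attribute's utility range (best level minus worst level) as a share of the total range across all attributes. The utilities below are invented for the example:

```python
# Hypothetical zero-centred part-worth utilities; figures are illustrative only.
utilities = {
    "Price": {"£499": 0.8, "£699": 0.1, "£899": -0.9},
    "RAM":   {"8GB": -0.4, "12GB": 0.0, "16GB": 0.4},
    "Brand": {"Brand A": 0.3, "Brand B": -0.3},
}

def attribute_importance(utilities):
    """Importance = an attribute's utility range as a share of all ranges,
    scaled so the scores sum to 100."""
    ranges = {a: max(l.values()) - min(l.values()) for a, l in utilities.items()}
    total = sum(ranges.values())
    return {a: 100 * r / total for a, r in ranges.items()}

importance = attribute_importance(utilities)
```

With these figures, price spans the widest utility range, so it receives the highest importance score, mirroring the price-versus-RAM comparison above.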
The part-worth utility scores give insight into the relative importance of each level within an attribute. These values are often centred at 0 and are displayed in the bar chart below. Here we can see that 16GB of RAM is preferred to 8GB and 12GB when considering buying a desktop computer.
Example of utility scores for RAM:
Choice-based conjoint:
One variant of conjoint is choice-based conjoint, which presents respondents with an array of different package permutations (i.e. varying product levels) and asks them to select their preferred concept from a selection on screen. The idea is to trade off products in a manner that simulates a typical purchase decision. The different levels are designed to alternate in a balanced manner, avoiding bias and allowing us both to understand which features are most important to buyers and to determine the relative importance of each level within each attribute.
Example of choice-based conjoint survey:
We have in-depth knowledge of conjoint analysis, allowing us to advise on the dos and don’ts that deliver the best insight and on which variant of conjoint is best for your project. We are experienced in programming conjoint into our surveys, having implemented this technique with many clients over the years, and our in-house team of statisticians will work alongside the research team to walk you through the project set-up and make the most of the outputs of this research method.
MaxDiff analysis and preference testing:
DRG offers MaxDiff analysis that can be easily integrated into your research, providing preference/importance scores for multiple items such as product features, concept designs and brands.
A brief guide:
MaxDiff (otherwise known as best/worst scaling) is an alternative to measuring preference via rating scales or ranking questions. Rating scales often discriminate poorly between items, respondents can become fatigued by long lists, and the results don’t accurately represent the strength of preference between items. Ranking questions also do not work well with long lists: while the order of importance is provided in the output, the strength of preference between items is not.
MaxDiff solves these issues: respondents trade off their best and worst options from sub-sets of items across multiple screens, simulating a more realistic decision-making task. Because the response options presented to respondents are discrete, MaxDiff does not suffer from scale bias (i.e. differences in scale interpretation between respondents). Since only a sub-set of items appears on each screen, this method can handle long lists of items, breaking the task into intuitive, bite-size chunks for respondents to answer (typically showing 4 or 5 items per screen). The output from MaxDiff provides both the order and the strength of preference, ultimately giving more accurate and extensive insight.
On-screen example of MaxDiff:
At DRG our in-house statisticians carefully design the MaxDiff questioning framework, ensuring that it is balanced, efficient for each respondent and unbiased towards any items tested. The design stage is the foundation of creating robust and accurate results, therefore we tailor our algorithmic set-up to your research needs, as well as having experienced researchers on-hand to guide you through the design stage, with the core aim to provide highly accurate and meaningful data outputs.
We utilise machine learning techniques, rather than simple counts analysis, to provide stable, unbiased models that also add flexibility to your results, meaning you can visualise the strength of preference for each item tested at an aggregate level but also apply filters to the data. To understand item preference for specific sub-groups in your target market, we can add up to 10 breaks in the tool output, expanding the capabilities and insight that can be derived from the MaxDiff analysis.
MaxDiff results give an output for each item tested on a scale of 0 to 100; if one item’s preference score is double that of another, it can be inferred that preference for it is twice as strong.
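As a rough illustration of how best/worst responses turn into scores, the sketch below uses simple counts analysis (best picks minus worst picks, rescaled to sum to 100). This is a simpler stand-in for the model-based scoring described above, and the responses are invented:

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a sub-set of items and
# records the respondent's "best" and "worst" picks. Data are invented.
tasks = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["A", "C", "E", "F"], "best": "A", "worst": "F"},
    {"shown": ["B", "D", "E", "F"], "best": "E", "worst": "D"},
    {"shown": ["A", "B", "E", "F"], "best": "E", "worst": "B"},
]

def count_scores(tasks):
    """Counts analysis: (times chosen best - times chosen worst) / times shown,
    shifted to be non-negative and rescaled so the scores sum to 100."""
    best, worst, shown = Counter(), Counter(), Counter()
    for t in tasks:
        shown.update(t["shown"])
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    raw = {i: (best[i] - worst[i]) / shown[i] for i in shown}
    shift = min(raw.values())
    pos = {i: r - shift for i, r in raw.items()}  # make all scores non-negative
    total = sum(pos.values())
    return {i: 100 * p / total for i, p in pos.items()}

scores = count_scores(tasks)
```

Items picked as best more often than worst climb the scale, while consistently rejected items fall towards 0; hierarchical model-based approaches refine this by accounting for which items each respondent actually saw together.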
Brand mapping:
We offer brand mapping using correspondence analysis to uncover how your brand is perceived in the marketplace. How strongly is your brand associated with the key features and traits that compose your market? Is your brand distinguished from competitors, or does it fall into a highly competitive brand space? And how can your brand strategy be altered to gain competitive advantage? These are some of the insights that brand maps provide, and as the results are plotted in two dimensions, the output is clear and intuitive to interpret.
Brand maps can be broken down by any sub-population of interest, allowing you to visualise differences in brand perception between, for example, your core consumers and the wider market, or across a range of other metrics. Brand zones can also be identified where features and brands cluster to form distinct market areas; this can uncover latent areas of opportunity where particular parts of the market are under-serviced or where few brands operate.
TURF analysis:
Total Unduplicated Reach and Frequency (TURF) is a technique used to find the optimal combination of products to reach the greatest possible proportion of the market, given a defined number of products to enter the marketplace.
Why use TURF?
TURF can be used to capture a new audience that your product line may not currently reach, to help decide which products to develop and bring to market, to understand the impact of adding a new product to a current line, and to make the most of a limited budget or shelf space in order to maximise sales and improve audience reach.
Why use DRG to conduct TURF?
We implement both forms of TURF in our analysis, the ‘greedy’ method and the ‘enumeration’ method, not only to find the optimal solution but also to observe the tipping point where market reach plateaus.
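To illustrate the difference between the two methods, here is a minimal Python sketch over invented consideration data (real TURF runs on survey responses from far more respondents). Enumeration guarantees the optimum by testing every combination; greedy builds the line one product at a time and scales to much larger problems:

```python
from itertools import combinations

# Hypothetical data: the set of products each respondent would consider buying.
respondents = [
    {"A", "B"}, {"A"}, {"C"}, {"B", "C"}, {"D"},
    {"A", "D"}, {"C", "D"}, {"B"}, {"A", "C"}, {"D"},
]
products = {"A", "B", "C", "D"}

def reach(combo):
    """Proportion of respondents reached by at least one product in the combo."""
    hit = sum(1 for r in respondents if r & set(combo))
    return hit / len(respondents)

def enumeration(k):
    """Enumeration method: exhaustively test every k-product combination."""
    return max(combinations(sorted(products), k), key=reach)

def greedy(k):
    """Greedy method: repeatedly add the product with the largest
    incremental reach over the products already chosen."""
    chosen = []
    for _ in range(k):
        nxt = max(products - set(chosen), key=lambda p: reach(chosen + [p]))
        chosen.append(nxt)
    return chosen
```

Plotting `reach(greedy(k))` for increasing `k` reveals the tipping point mentioned above: the curve plateaus once extra products only duplicate audiences already reached.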
Gabor-Granger Pricing Method:
The Gabor-Granger pricing method measures consumers’ willingness to pay for your product across a set range of price points, revealing the price elasticity of demand. This technique provides insight into consumer price tolerance and helps to inform which price point will optimise market reach.
A plot of the ‘price elasticity of demand’ is shown below; this shows the proportion of consumers who would consider purchasing the product at each given price point.
Price points are kept equally spaced and are designed to capture a realistic price range, focusing on adapting pricing rather than reinventing a price strategy. A more price-sensitive market is defined by a steeper demand curve: as price increases, demand decreases more sharply.
Predicted revenue vs price charts are also useful for visualising the effect of price on predicted revenue, meaning optimal price points can be identified to maximise business revenue.
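The revenue calculation behind such a chart can be sketched as follows. The demand curve below is invented for the example; predicted revenue at each price point is simply the price multiplied by the proportion of consumers willing to buy at that price:

```python
# Hypothetical Gabor-Granger output: at each tested price point, the
# proportion of respondents willing to buy (the demand curve).
demand = {4.99: 0.82, 5.99: 0.71, 6.99: 0.55, 7.99: 0.34, 8.99: 0.18}

# Predicted revenue per 100 consumers = price x proportion willing to buy x 100.
revenue = {price: price * p * 100 for price, p in demand.items()}

# The revenue-maximising price point is the peak of the revenue-vs-price chart.
optimal_price = max(revenue, key=revenue.get)
```

Note how the optimum need not be the lowest price: a small price rise can more than offset the demand it loses, which is exactly the trade-off the revenue-vs-price chart makes visible.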
Want to know more? Submit an enquiry using the form or call us on 01434 611160 and a member of the team will be in touch.
“Public Knowledge delivered a professional, friendly and responsive service throughout the duration of the project. From the initial recommendations of research methodology to fit my client’s budget, to providing feedback on questions and presenting findings, they were a pleasure to work with. My tight deadlines were met without any problems and reports provided were easy to understand and provided a great insight from which to make decisions. I’ll definitely work with Public Knowledge on future research projects.”
Director | DTW