6 items tagged "survey"

  • Just Using Big Data Isn’t Enough Anymore

Big Data has quickly become an established fact for Fortune 1000 firms — such is the conclusion of a Big Data executive survey that my firm has conducted for the past four years.

    The survey gathers perspectives from a small but influential group of executives — chief information officers, chief data officers, and senior business and technology leaders of Fortune 1000 firms. Key industry segments are heavily represented — financial services, where data is plentiful and data investments are substantial, and life sciences, where data usage is rapidly emerging. Among the findings:


    • 63% of firms now report having Big Data in production in 2015, up from just 5% in 2012
    • 63% of firms reported that they expect to invest greater than $10 million in Big Data by 2017, up from 24% in 2012
    • 54% of firms say they have appointed a Chief Data Officer, up from 12% in 2012
    • 70% of firms report that Big Data is of critical importance to their firms, up from 21% in 2012
    • At the top end of the investment scale, 27% of firms say they will invest greater than $50 million in Big Data by 2017, up from 5% of firms that invested this amount in 2015

    Four years ago, organizations and executives were struggling to understand the opportunity and business impact of Big Data. While many executives loathed the term, others were apostles of the belief that data-driven analysis could transform business decision-making. Now, we have arrived at a new juncture: Big Data is emerging as a corporate standard, and the focus is rapidly shifting to the results it produces and the business capabilities it enables. When the internet was a new phenomenon, we’d say “I am going to surf the World Wide Web” – now, we just do it. We are entering that same phase of maturity with Big Data.

    So, how can executives prepare to realize value from their Big Data investments?

    Develop the right metrics.

While a majority of Fortune 1000 firms report implementing Big Data capabilities, few firms have shown how they will derive business value over time from these often substantial investments. When I discuss this with executives, they often point out that the lack of highly developed metrics is a function both of the relative immaturity of Big Data implementations and of where in the organization sponsorship for Big Data originated and where it currently reports. Organizations in which the executive responsible for data reports to the Chief Financial Officer are more likely to have developed precise financial measurements early on.

    Another issue with measuring the effectiveness of Big Data initiatives has been the difficulty of defining and isolating their costs. Big Data has been praised for the agility it brings to organizations, because of the iterative process by which they can load data, identify correlations and patterns, and then load more data that appears to be highly indicative. By following this approach, organizations can learn through trial and error. This poses a challenge to early measurement because most organizations have engaged in at least a few false starts while honing Big Data environments to suit their needs. Due to immature processes and inefficiencies, initial investments of time and effort have sometimes been larger than anticipated. These costs can be expected to level off as experience and efficiencies are brought to bear.

    Identify opportunities for innovation.

    Innovation continues to be a source of promise for Big Data. The speed and agility it permits lend themselves to discovery environments such as life sciences R&D and target marketing activities within financial services. Success stories of Big-Data-enabled innovation remain relatively few at this stage. To date, most Big Data accomplishments have involved operational cost savings or allowing the analysis of larger and more diverse sets of data.

    For example, financial firms have been able to enhance credit risk capabilities through the ability to process seven years of customer credit transactions in the same amount of time that it previously took to process a single year, resulting in much greater credit precision and lower risk of credit fraud. Yet, these remain largely back-office operations; they don’t change the customer experience or disrupt traditional ways of doing business. A few forward-thinking financial services firms have made a commitment to funding Big Data Labs and Centers of Excellence. Companies across industry segments would benefit from making similar investments. But funding won’t be enough; innovating with Big Data will require boldness and imagination as well.

    Prepare for cultural and business change.

Though some large firms have invested in optimizing existing infrastructure to match the speed and cost benefits offered by Big Data, new tools and approaches are displacing whole data ecosystems. A new generation of data professionals is now emerging. They have grown up using statistical techniques and tools like Hadoop and R, and as they enter the workplace in greater numbers, traditional approaches to data management and analytics will give way to these new techniques.

When I began advising Fortune 1000 firms on data and analytics strategies nearly two decades ago, I assumed that 95% of what was needed would be technical advice. The reality has been the opposite. The vast majority of the challenges companies struggle with as they operationalize Big Data are related to people, not technology: issues like organizational alignment, business process and adoption, and change management. Companies must take the long view and recognize that businesses cannot successfully adopt Big Data without cultural change.

Source: Harvard Business Review

  • Measuring competitive confidence: what to ask?


    Running a competitive confidence survey will soon be a matter of course for competitive enablement programs.

    Measuring sales confidence levels in your compete program and against your competitors will give you the kind of measurable insights you need to lift the entire program.

    But in order to get to a place where competitive confidence becomes your most important KPI, you’ll need to thoughtfully craft a competitive confidence survey. And that starts with asking the right questions.

    Here are three categories of questions you should include in your competitive confidence survey, and some sample questions to get you going.

    Category 1: Assess Sales Confidence within your competitive landscape

Your competitive landscape is ever-changing and ever more crowded, and your sales reps are often the front line for identifying when a new competitor enters the fray.

    The questions in this category are designed to understand general levels of confidence against any and all competitors.

    Questions to ask

    • How often do you come up against a new competitor in deals?
    • How often do you come up against any competitor in a deal?
    • Generally, how confident are you in de-positioning competitors?

    Category 2: Sales confidence against your top competitor(s)

While some companies can safely say they have dozens of tier-one competitors, the questions in this category are meant to home in on your top competitor or two.

    One of the most important ways a competitive confidence survey boosts your compete program is by helping you prioritize your time and effort.

    If sales confidence against your top competitor is high, that might be your cue to move on to different competitors.

    Questions to ask

    • In deals, how often do you come up against [top competitor]?
    • How confident are you in de-positioning [top competitor]?
• Beyond [top competitor], who do you think is the most important competitor to focus on?

    Category 3: Sales confidence in your competitive enablement program

    How confident in your competitive content are the teams you’re enabling? It’s probably the most crucial question competitive enablement experts need to be asking themselves.

    If the results are lower than expected, don’t take it personally. Instead, use the results as a way to help understand what content is most useful in deals and where to prioritize your efforts.

    Questions to ask

    • How would you rate our team’s competitive content from 0-5?
    • How often do you rely on competitive content in competitive deals?
    • What is the type of competitive content you most reference?

    Putting the results into action from your competitive confidence survey questions

Each of the three categories of questions in your competitive confidence survey has a different purpose. And each purpose maps to different actions you can take based on the results.

Finding out that your sales team lacks confidence against competitors in general could signal a need for more training, just as a team lacking confidence against your top competitors might suggest a need for better positioning and messaging from your product marketing team.

And of course, if your sales team lacks confidence in the competitive content you’re providing, you should take in that feedback and act on it.
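As a rough illustration, here is a minimal Python sketch of how the three category scores might be tallied and mapped to follow-up actions. The 0-5 scale, the threshold, and the category labels are illustrative assumptions, not part of any particular survey tool.

```python
# Minimal sketch: turn competitive confidence survey scores into next actions.
# The 0-5 scale, the 3.0 threshold, and the category labels are assumptions.
from statistics import mean

# Hypothetical per-respondent scores for each question category
responses = {
    "general_landscape": [3, 2, 4, 3, 2],   # Category 1: overall confidence
    "top_competitor": [1, 2, 2, 1, 3],      # Category 2: vs. your top rival
    "enablement_content": [4, 4, 3, 5, 4],  # Category 3: content confidence
}

actions = {
    "general_landscape": "schedule broader competitive training",
    "top_competitor": "revisit positioning with product marketing",
    "enablement_content": "rework the competitive content library",
}

THRESHOLD = 3.0  # below this average, the category needs attention

for category, scores in responses.items():
    avg = mean(scores)
    verdict = actions[category] if avg < THRESHOLD else "maintain, re-survey later"
    print(f"{category}: avg {avg:.1f} -> {verdict}")
```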

    That way, by the next time you run the survey, you’ll be able to see the progress you made and show off your competitive enablement team’s value across the entire org.

    Author: Ben Ronald

    Source: Klue

  • Optimize your survey design through iterations


    In a market starved for survey participants, designing surveys that encourage participation is key. This article describes how to optimize your survey during fieldwork by making small, incremental iterations based on what the survey metrics are telling you.

    In a market starved for survey participants, one of the hottest topics in market research these days is earnings per click (EPC). EPC measures how much sample suppliers in a marketplace (e.g. Lucid, Cint) can expect to earn per respondent sent into your survey.

    EPC is used to determine the cost per interview (CPI). It is also a measure of survey health for sample providers who monitor EPC during fieldwork. From a sample supplier’s standpoint, studies with low EPC are more difficult to monetize, so it is to their benefit to direct traffic to surveys that are easier to complete such as those that have a higher incidence rate (IR) or shorter length of interview (LOI).

    It wasn’t until recently that this model, where suppliers direct traffic to maximize EPC, became a real challenge for researchers. Now in a supply crisis for participants (especially thoughtful ones!), those of us who do not optimize our surveys to maintain a high EPC will struggle during fieldwork.

    While increasing the CPI may solve the problem in some cases, it is often not enough in a world where participant demand far exceeds supply. A more sustainable strategy is to focus on improving survey conversion: the higher your conversion, the higher the EPC, the more traffic you’ll get.
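As a rough illustration, EPC is commonly computed as the number of completes times the CPI, divided by the number of respondents sent into the survey; marketplaces may define it slightly differently, and the numbers in this Python sketch are made up.

```python
# Back-of-the-envelope sketch of EPC and conversion; all figures are made up.

def epc(completes: int, entrants: int, cpi: float) -> float:
    """Earnings per click: supplier revenue per respondent sent into the survey."""
    return (completes * cpi) / entrants if entrants else 0.0

entrants = 1000   # respondents a supplier directs to the survey
completes = 120   # respondents who qualify and finish
cpi = 3.50        # dollars paid per complete

conversion = completes / entrants
print(f"conversion: {conversion:.1%}, EPC: ${epc(completes, entrants, cpi):.2f}")

# Raising conversion (shorter LOI, higher IR, less drop-off) raises EPC
# without touching the CPI:
print(f"improved EPC: ${epc(180, entrants, cpi):.2f}")
```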

    Iterate to optimize

    The dynamic nature of the marketplace means that survey design should be more iterative than it has been in the past. Gone are the days when you could launch a survey, then sit back and relax for a few days until the end of the fieldwork.

    Instead, we now have an opportunity to inspect and incrementally adapt our survey design. Because we work with software and not paper questionnaires anymore, we can gauge the performance of a survey early and make small changes that can improve the EPC. Participants are constantly giving us feedback about our surveys and it is to our own detriment to ignore their voices and rigidly carry on with a sub-optimal survey.

    How to improve survey metrics

Below, find five ways to enhance your surveys.

    1) Plan to iterate on your design

    Designing surveys optimized for the marketplace starts with being aware of your survey metrics during fieldwork. Listen to what the metrics, and your participants, are telling you. Expect to iterate on your original design.

    The “soft launch” is an opportunity to assess the health of the survey metrics early, and proactively improve your design before your conversion rate falls off a cliff.

    2) Optimize the length of interview

    There are two things displayed on a panelist’s dashboard that inform their decision of whether or not to participate: the incentive (e.g. points), and the LOI. Putting yourself in the panelist’s shoes, how likely are you to choose to complete a survey with an LOI longer than 20 minutes? Shorter LOIs not only encourage a healthy flow of traffic to your survey, but they will also likely yield more thoughtful responses.

    In addition, it’s important to think about the LOI for people who do not qualify for your study. It is tempting to include all sorts of sizing questions in the screener, but because the LOI of your screened-out participants is also monitored, a long screener (i.e. 5+ minutes) will negatively impact the overall performance of your survey.

If you suspect your LOI is discouraging participants, you can incrementally iterate on your design: consider hiding low-scoring attributes or removing those ‘on-the-fence’ questions to quickly shorten the survey.

3) Lower the drop-off rate

    Aside from a short LOI and making a survey mobile-friendly, survey engagement is key to preventing drop-off. If your participants are giving you feedback that leads you to believe that a survey is too bland, consider “story-fying” your survey. Story-fying means giving your survey more of a storybook feel with an enticing landing page, a pleasant user interface, illustrations, a conversational tone, and a creative and engaging sense of progression.

    After the soft launch, review the questions that result in the highest drop-off rate, then make small adjustments to improve the experience. Can you adjust the language to be more friendly and engaging? Can you change the question format to make it more digestible?
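A minimal sketch of that review step, assuming you can export a response log that records the last question each participant answered; the column names are hypothetical.

```python
# Minimal sketch: find the questions where participants abandon the survey.
# Assumes one row per participant with the id of their last answered question.
import pandas as pd

log = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "last_question": ["q1", "q4", "q4", "q7", "q4", "q7"],
    "completed": [False, False, False, True, False, True],
})

dropoffs = (
    log[~log["completed"]]
    .groupby("last_question")
    .size()
    .sort_values(ascending=False)
)
print(dropoffs)  # q4 stands out -> rework its wording or format first
```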

    4) Improve the incidence rate

    After the soft launch, you should have a sense of where the incidence rate stands. Knowing that a low IR could potentially slow traffic to your survey, it’s wise to look at where people are screening out and consider relaxing some of the less-important screener criteria, if you can.

    Furthermore, examine the IR by quota group. This will give you an idea of how difficult fieldwork will be near the end, when the most difficult quotas are left open to fill. Will you be looking for a needle in a haystack?
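The same kind of export supports a quick per-quota incidence check; again, the column names and quota labels in this sketch are hypothetical.

```python
# Minimal sketch: incidence rate (qualified / screened) per quota group.
import pandas as pd

screener = pd.DataFrame({
    "quota_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "55+"],
    "qualified": [True, False, True, True, False, False, True],
})

ir_by_quota = screener.groupby("quota_group")["qualified"].mean().sort_values()
print(ir_by_quota)  # a very low IR in one group flags a needle-in-a-haystack quota
```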

    5) Lower the reconciliation rate

    Everyone in the industry agrees that we absolutely cannot compromise when it comes to data quality. Yet, what qualifies as “good” data remains subjective, a theoretical gray area. By extension, reconciliation is also rather a subjective exercise.

Generally, to prevent the completes you toss out from impacting your conversion, consider using a “quality control redirect”, which lets your sample supplier know in real time that the termination was the participant’s fault.

    Incrementally adapt, or perish

    Survey metrics are more transparent than ever to both sample suppliers and panelists. Let’s pay attention! These metrics are communicating important details about how participants feel about your survey. We ought to listen to the data, listen to the participants, and course-correct our surveys to optimize EPC.

    The opportunity is before us to make adjustments during fieldwork to establish and maintain healthy survey metrics. Designing surveys that are optimized for modern sampling technologies not only benefits researchers, but also improves the participant experience and the participation rate. It’s the little things: small, incremental iterations that can result in huge, positive impacts.

    Author: Karine Pepin
    Source: Greenbook
  • Questionnaire design: garbage in, garbage out


    One of the most useful ways to collect data when conducting market research is via the use of HUMINT (Human Intelligence). Data can be collected via in-depth qualitative interviews or large scale quantitative surveys. In this blog, the focus is on the latter. Quantitative surveys are a great method to gather representative information about a specific target group.

    In order to obtain valuable input for analysis you need a well-designed questionnaire. If your questionnaire isn’t well designed, you will have trouble getting valuable results out of it: ‘garbage in, garbage out’. This blog will provide some guidance on how to prevent garbage input (and thus garbage output) when designing a questionnaire for a quantitative survey.

Research purpose and KPIs

    When you plan to conduct a quantitative survey, you need a clear view on what you want to achieve after collecting and analyzing the results: the purpose of your survey. Ask yourself the following questions during the questionnaire design:

    • What is the intended use of the insights resulting from the survey? What should the survey results clarify to make a substantiated decision?

The answer to these questions will not translate into survey questions yet, but rather into measurable Key Performance Indicators (KPIs), e.g. market share, brand awareness, buying criteria, satisfaction, etc. Also keep in mind that if you want to show the final results in a data visualization tool (e.g. PowerBI, Tableau), the KPIs should be easy to visualize and easy for the user to understand.

Creating key questions

Once you have a clear purpose for conducting the survey and know which KPIs you need to measure, it is time to create the key questions. These questions provide a result for the sample of respondents. While you create them, all kinds of secondary questions will probably pop up. It is vital to clearly distinguish the key questions that measure your KPIs from the ‘nice to know’ questions, and to position the key questions in such a way that they can be answered without bias. From those ‘nice to know’ questions, only keep the ones that really add value.

    A longer questionnaire leads to distracted, less motivated respondents.

Respondent profile

    A key aspect of analysis of a quantitative survey is segmentation. When you analyze the results of the survey, it is beneficial to segment your sample to identify significant differences between types of respondents. This is why your questionnaire should start with questions that create a respondent profile. Obvious questions that come to mind are demographics like age and gender, but it is important not to miss out on any indicator where a segmentation of the sample could be useful.

If we were, for example, to conduct a survey among 100 CEOs of European companies, it could be useful to segment the CEOs by personal indicators like how long they have been in the position of CEO, how long they have worked at that particular company, what other jobs they have had, etc. It could also be useful to segment the companies they work for on indicators like company size, the sector they are active in, whether they operate B2B or B2C, etc.
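As a rough illustration, once such profile questions are in the questionnaire, segmentation at analysis time can be as simple as a group-by on the profile variables. The data frame and column names in this Python sketch are hypothetical.

```python
# Minimal sketch: compare a KPI across segments built from profile questions.
import pandas as pd

df = pd.DataFrame({
    "company_size": ["SME", "SME", "Enterprise", "Enterprise"],
    "sector": ["B2B", "B2C", "B2B", "B2C"],
    "satisfaction": [4.1, 3.2, 4.5, 3.8],  # hypothetical KPI per respondent
})

print(df.groupby("company_size")["satisfaction"].mean())
print(df.groupby(["company_size", "sector"])["satisfaction"].mean())
```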

Crystal clear questions

One thing that should be present throughout the entire questionnaire is clarity for the respondent. Be sure that what you mean is 100% clear to the respondent in every single question. You can achieve this by providing context with text, images or video for any question or term that may possibly be interpreted in multiple ways.

A good way to provide context about specific terms is to add a sentence right after a question that uses a difficult or multi-interpretable term. For instance, if you ask the open question ‘Which products of term X can you name?’, you should follow with: ‘By term X we mean A, B and C, excluding X, Y and Z.’

Besides providing clarity about definitions, it is also important to provide clarity on what is expected from the respondent. For instance: ranking buying criteria on importance on a scale of 1 to 5, answering with a number, multiple answers being possible, etc. Make this explicit! It will not only provide clarity to the respondent but also make life easier once you analyze the results.

Proper routing will also provide clarity to the respondent and make the analysis easier. Think thoroughly about which answers should lead to skipping or jumping to other questions. If I ask the 100 CEOs whether they have heard of Hammer and I want to follow up with a question about their perception of Hammer, I should be sure that the respondents who have never heard of Hammer skip the follow-up question about perception!
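A minimal sketch of such a routing rule, with hypothetical question ids rather than any particular survey engine’s syntax:

```python
# Minimal sketch: respondents who have never heard of the brand skip the
# perception question. Question ids and the routing table are hypothetical.

def next_question(current_id: str, answer: str) -> str:
    routing = {
        # (question, answer) -> next question
        ("q_awareness", "yes"): "q_perception",
        ("q_awareness", "no"): "q_demographics",  # skip perception entirely
    }
    return routing.get((current_id, answer), "q_end")

assert next_question("q_awareness", "yes") == "q_perception"
assert next_question("q_awareness", "no") == "q_demographics"
```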

Preventing bias

In order to get valid results out of your survey, it is key to have your respondents answer questions with as little bias as possible. The two most common ways bias can sneak into your questionnaire are:

• Loaded wording: when you ask your respondents about sentiment, it is key to phrase the question as neutrally as possible. Avoid loaded words about your topic. A question like ‘How beautiful do you find logo X?’ is a no-go, for instance. Instead, you could ask ‘What is your opinion about logo X?’ and have respondents rate from 1 (not beautiful at all) to 5 (very beautiful).
• Order of text and questions: quite often, surveys ask questions about a topic that has already had attention earlier on. Be sure to avoid giving any information that sends the respondent in a certain direction about a topic you will ask about later; randomizing the order of answer options can also help, as shown in the sketch below.
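A minimal sketch of per-respondent option randomization, keeping an anchored ‘Other’ option at the end; the option labels are illustrative.

```python
# Minimal sketch: shuffle answer options per respondent to dampen order bias,
# while anchored options such as "Other" stay at the end.
import random

def shuffled_options(options: list[str],
                     anchored: tuple[str, ...] = ("Other",)) -> list[str]:
    movable = [o for o in options if o not in anchored]
    fixed = [o for o in options if o in anchored]
    random.shuffle(movable)  # in-place shuffle of the movable options
    return movable + fixed

print(shuffled_options(["Price", "Quality", "Service", "Brand", "Other"]))
```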

Questionnaire feedback

Last, but not least, be sure to always obtain feedback before approaching respondents. No matter how carefully you designed your questionnaire, it is very common to overlook minor mistakes that are easily made. It is also important to gain feedback from a content perspective. Are all KPIs properly measured? Are there any questions missing? Or are there questions in your questionnaire that should be removed? It is best to have your draft questionnaire checked by three types of people:

• Stakeholders: the people who benefit from a successful survey. At Hammer, when we conduct a survey assigned by a client, we align the questionnaire closely with their demands. In the end, they have to make decisions supported by the results of the survey.
    • Experts: those people who have the knowledge and experience to improve your questionnaire by viewing it from their perspective. This can be either experts on the topic or experts on the research method. If I want to conduct a survey among dairy farmers for instance, I will ask both a dairy/agriculture expert from our network and one of my experienced co-workers with market research knowhow.
• People from the target group: asking for feedback from someone who fits the criteria of your target group is invaluable. This is a great check to find out whether the questionnaire is completely clear to respondents. Running the questionnaire by a potential respondent can also serve as a pilot to see whether the types of answers that come out of it are analyzable.

Designing a solid questionnaire can easily be underestimated. Make sure your research objective is well defined, as well as the corresponding KPIs. Make the questionnaire as clear as possible for the respondent, prevent bias, and ask for feedback from different types of stakeholders.

    Author: Jasper Reintjens

    Source: Hammer, Market Intelligence

  • The Real Business Intelligence Trends in 2022  


    Many companies are still adapting to changed requirements due to the COVID-19 pandemic. Although the situation now seems less acute and more long-term changes toward a ‘new normal’ are on the horizon, day-to-day business is far from settled. Some companies are dealing with last year’s decline in orders, while others are coping with the ongoing supply chain disruptions or are still in the midst of adapting their business model to the changed requirements or better equipping themselves for possible future crises.

A look at this year’s business intelligence trends reveals that companies are still working to position themselves well for the long term and are strengthening the foundation of their data usage. Rather than treating symptoms, companies are addressing the root causes of their challenges (e.g., data quality) and also tackling the holistic establishment of a data-driven culture.

    The BARC Data, BI and Analytics Trend Monitor 2022 illustrates which trends are currently regarded as important in addressing these challenges by a broad group of BI and analytics professionals. Their responses provide a comprehensive picture of regional, company and industry specific differences and offer insights into developments in the BI market and the future of BI.

    Our long-term comparisons also show how trends have developed, making it possible to separate hype from stable trends. BARC’s Data, BI and Analytics Trend Monitor 2022 reflects on the business intelligence, analytics and data management trends currently driving the market from a user perspective.

    The Most (and Least) Important Business Intelligence Trends in 2022

    We asked 2,396 users, consultants and vendors for their views on the most important BI, analytics and data management trends, delivering an up-to-date perspective on regional, company and industry-specific differences and providing comprehensive insights on the BI, analytics and data management market.

    Data quality/master data management, data-driven culture and data governance are the three topics that practitioners identified as the most important trends in their work.

    At the other end of the spectrum, mobile BI, augmented analytics and IoT data and analytics were voted as the least important of the twenty trends covered in BARC’s survey.

    Master data and data quality management in first position has retained this ranking over the last five years while the second most important trend, establishing a data-driven culture, has steadily increased in importance. 

    The significance of these two topics transcends individual regions and industry sectors. Establishing a data-driven culture is a trend that was newly introduced to the BARC Trend Monitor three years ago. Starting from fifth position in the first edition, it made its way up to third place in the last two years and is now ranked number two. 

    Data governance has also increased in importance. Having held down fourth position for several years, it rose to number three this year. Data discovery and visualization and self-service analytics (ranked four and five respectively) have been equally consistent trends, but both have now taken a back seat to data-driven culture.

    Our View on the Results

Master data and data quality management has now been ranked as the most important trend for five years in a row. The stability of this trend shows that practitioners consider good quality data significantly more relevant than other trend topics with a much broader presence in the media. It also reflects the fact that many organizations place high emphasis on their master data and data quality management because they have not yet reached their goals.

    This is in line with findings of other BARC Surveys that repeatedly show that companies are constantly battling with insufficient data quality as a hurdle to making better use of data. Hence, master data and data quality management will remain very important and is also linked to the equally stable significance of data governance, which was ranked in fourth position for four consecutive years before climbing to third place this year.

    Establishing a data-driven culture has increased in importance and is now ranked as the second most important trend. Since its introduction to the Trend Monitor in 2019, this trend has always ranked among the top five and is constantly gaining in prominence. This can be explained by the rising awareness that fostering a data-driven culture is vital to realizing the full data potential of a company. 

Data discovery and data visualization and self-service BI have slipped down the rankings slightly this year. However, being ranked fourth and fifth in our list of 20 topics underlines their importance to organizations. All the top trends combine organizational and technological elements. They act as a solid foundation, and most companies are keen to place great emphasis on them.

    The top five trends represent the foundation for organizations to manage their own data and make good use of it. Furthermore, they demonstrate that organizations are aware of the relevance of high quality data and its effective use. Organizations want to go beyond the collection of as much data as possible and actively use data to improve their decision making processes. This is also supported by data warehouse modernization, which holds on to sixth position this year. 

    Some trends have slightly increased in importance since last year (e.g., data catalogs and alerting). However, most have stayed the same or just changed one rank.

    There are some major shifts in the downward trends. Data preparation by business users dropped from rank seven to rank ten due to alerting and agile BI development climbing the rankings. Mobile BI also fell three places to rank eighteen. In this case, a continuous downward trend can be observed over the last four years.

    Source: Business Application Research Center (BARC)

  • What to use for your next market research? Online panel or programmatic sample?


    Technology is driving growth across industries, creating space for unconventional ideas and technological innovations that infiltrate traditional models and disrupt the status quo. Learn how to decide between a programmatic vs an online panel for your next research study.

Companies unable to pivot find themselves in the fight of their lives. Peer-to-peer ride-sharing services such as Uber and Lyft, for example, have wounded the taxi industry, and entertainment streaming services like Netflix and Hulu are slowly sending linear TV to an early grave. Technology’s profound impact on industry in general, and especially on service industries like the online sample industry, will only intensify as emerging technologies like AI and machine learning inch toward mass adoption.

But let’s turn the clock back to the mid-2000s and talk about the technology that upended the way people communicate: social media. Myspace, Hi5, and Facebook were in their infancy during that time. Google launched Gmail, strategically providing millions of users with free personal email addresses to access its suite of services, including the now-defunct Google+. What most people don’t realize is that social networks and the easy access to user email addresses changed the market research industry forever, and here’s how.

    In this corner: Old School 'Online Panel' 

    Online panels are communities of individuals sourced by sample providers to take online surveys. This business model is cost-effective for market researchers as thousands of people can be interviewed in a fraction of the time for a fraction of the cost. Social media provides an alternative way to collect a sample by accessing broader online communities.

There are many benefits to traditional online panels. Sample providers can create more niche communities because they have more control over who enters the community. Through a series of quality control checks, the provider can easily determine whether a respondent is a good fit. If the potential respondent passes muster, they are accepted into the panel. The most effective panels provide live customer service to respondents, typically via phone with a real person on the other end to answer the respondent’s questions. In successful cases, the panelist moves from the panel to the best-matching survey (double opt-in), and usually takes about 5-10 surveys per year. This process improves data quality.

    The problem with traditional online panels is keeping up with demand. The sample company must have bid managers and project managers available to process client requests 24 hours a day, which erodes the cost savings previously mentioned.

    Now, let’s fast forward to the mid-2010s.

    In this corner: New Age 'Programmatic Sample'

By this time, the internet is faster, Wi-Fi is ubiquitous, and artificial intelligence is making our devices smarter. Technology is cheaper and more accessible, and the term 'programmatic' embeds itself in industry vernacular. A new business model emerges for the online sample industry, called programmatic sampling: composed of routers and APIs, it has become the preferred business model among sample companies.

A programmatic sample is a platform that connects to and tracks user behavior across multiple internet mediums: from websites to social media, affiliate sites to online panels, data is captured by an API (Application Programming Interface). This technology aggregates millions of people from the internet and places them at the ready for the next available online survey with a few clicks of a mouse.

The benefits of a programmatic sample are straightforward. It can run 24 hours a day, seven days a week, 365 days a year, all over the world, with minimal or no intervention from project managers or bid managers.

But programmatic has its challenges as well. Interaction with respondents is greatly diminished or disappears entirely, and the respondents are given the blanket label of 'online traffic'. The online traffic that enters the routers is less controlled, and the respondent’s survey experience is less than ideal. They encounter multiple survey windows before they get to the actual survey. The double opt-in model is lost, and it becomes more of a quintuple opt-in model, which some respondents don’t like. Due to this, most respondents in this model will only take one or two surveys in a lifetime and then leave the platform forever.

    However, the router will keep the wheels turning and consistently generate new online traffic (online respondents). Like a Ferris Wheel, people hop on and hop off, but programmatic sample always keeps adding new people and continues to do so until it is turned off.
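To make the model concrete, here is a minimal Python sketch of how a router might score open surveys and direct incoming traffic; the survey records and the scoring rule are crude illustrative assumptions, not any vendor’s actual algorithm.

```python
# Minimal sketch: route an incoming respondent to the open survey with the
# best expected earnings per click. All records and the scoring are made up.

surveys = [
    {"id": "S1", "cpi": 3.0, "ir": 0.30, "loi_min": 10, "open": True},
    {"id": "S2", "cpi": 5.0, "ir": 0.05, "loi_min": 25, "open": True},
    {"id": "S3", "cpi": 2.0, "ir": 0.60, "loi_min": 6, "open": False},
]

def expected_epc(s: dict) -> float:
    # Crude proxy: payout discounted by incidence and interview length.
    return s["cpi"] * s["ir"] / s["loi_min"]

def route(all_surveys: list[dict]) -> dict:
    candidates = [s for s in all_surveys if s["open"]]
    return max(candidates, key=expected_epc)

print(route(surveys)["id"])  # S1: short, high-incidence surveys win the traffic
```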

    And the winner is?

It’s hard to say, but the result looks like a tie. Some believe that traditional online panels are dying due to the high cost of maintaining them. However, this model offers better data quality and access to niche sample sectors like U.S. Hispanics or other minority groups. The issue is speed and sample volume, which is where the programmatic sample model excels. Perhaps the best bet is for sample companies to provide a hybrid of both models. But even then, as technology evolves and clients feel more pressure to produce relevant marketing campaigns that generate positive ROI, programmatic sampling still has the advantage. It will be interesting to watch how technology transforms the sample industry over the next ten or so years.

    Author: Art Padilla

    Source: Greenbook Blog
