4 items tagged "data collection"

  • 5 best practices on collecting competitive intelligence data

    5 best practices on collecting competitive intelligence data

    Competitive intelligence data collection is a challenge. In fact, according to our survey of more than 1,000 CI professionals, it’s the toughest part of the job. On average, it takes up one-third of all time spent on the CI process (the other two parts of the process being analysis and activation).

    A consistent stream of sound competitive data—i.e., data that’s up-to-date, reliable, and actionable—is foundational to your long-term success in a crowded market. In the absence of sound data, your CI program will not only prove ineffective—it may even prove detrimental.

    By the time you’re done reading, you’ll have an answer to each of the following:

    • Why is gathering competitive intelligence difficult?
    • What needs to be done before gathering competitive intelligence?
    • How can you gather competitive intelligence successfully?

    Let’s begin!

    Why is gathering competitive intelligence difficult?

    It’s worth taking a minute to consider why gathering intel is the biggest roadblock encountered by CI pros today. At the risk of oversimplifying, we’ll quickly discuss two explanations (which are closely related to one another): bandwidth and volume.

    Bandwidth

    CI headcount is growing with each passing year, but roughly 30% of teams consist of two or fewer dedicated professionals. 7% of teams consist of half a person—meaning a single employee spends some of their time on CI—and another 6% of businesses have no CI headcount at all.

    When the responsibility of gathering intel falls on the shoulders of just one or two people—who may very well have full-time jobs on top of CI—data collection is going to prove difficult. For now, bandwidth limitations help to explain why the initial part of the CI process poses such a significant challenge.

    Volume

    With the modern internet age has come an explosion in competitive data. Businesses’ digital footprints are far bigger than they were just a few years ago; there’s never been more opportunity for competitive research and analysis.

    Although this is an unambiguously good thing—case in point: it’s opened the door for democratized, software-driven competitive intelligence—there’s no denying that the sheer volume of intel makes it difficult to gather everything you need. And, obviously, the challenges of ballooning data are going to be compounded by the challenges of limited bandwidth.

    Key steps before gathering competitive intelligence

    Admittedly, referring to the collection of intel as the initial part of the CI process is slightly misleading. Before you dedicate hours of your time to visiting competitors’ websites, scrutinizing online reviews, reviewing sales calls, and the like, it’s imperative that you establish priorities.

    What do you and your stakeholders hope to achieve as a result of your efforts? Who are your competitors, and which ones are more or less important? What kinds of data do you want to collect, and which ones are more or less important?

    Nailing down answers to these questions—and others like them—is a critical prerequisite to gathering competitive intelligence.

    Setting goals with your CI stakeholders

    The competitors you track and the types of intel you gather will be determined, in part, by the specific CI goals towards which you and your stakeholders are working.

    Although it’s true that, at the end of the day, practically everyone is working towards a healthier bottom line and greater market share, different stakeholders have different ways of contributing to those common objectives. It follows, then, that different stakeholders have different needs from a competitive intelligence perspective.

    Generally speaking:

    • Sales reps want to win competitive deals.
    • Marketers want to create differentiated positioning.
    • Product managers want to create differentiated roadmaps.
    • Customer support reps want to improve retention against competitors.
    • Executive leaders want to mitigate risk and build long-term competitive advantage.

    Depending on the size of your organization and the maturity of your CI program, it may not be possible to serve each stakeholder to the same extent simultaneously. Before you gather any intel, you’ll need to determine which stakeholders and goals you’ll be focusing on.

    Segmenting & prioritizing your competitors

    With a clear sense of your immediate goals, it’s time to segment your competitive landscape and figure out which competitors are most important for the time being.

    Segmenting your competitive landscape is the two-part job of (1) identifying your competitors and (2) assigning each one to a category. The method you use to segment your competitive landscape is entirely up to you. There are a number of popular options to choose from, and they can even be layered on top of one another. They include:

    • Direct vs. indirect vs. perceived vs. aspirational competitors
    • Sales competitiveness tiers
    • Company growth stage tiers

    We’ll stick with the first option for now. Whereas a direct competitor is one with which you go head-to-head for sales, an indirect competitor is one that sells a similar product to a different market or a tangential product to the same market. And whereas a perceived competitor is one that—unbeknownst to prospects—offers something completely different from you, an aspirational competitor is one that you admire for the work they’re doing in a related field.

    Once you’ve categorized your competitors, consider your immediate goals and ask yourself, “Given what we’re trying to do here, which competitors require the most attention?” The number of competitors you prioritize largely depends on the breadth of your competitive landscape.

    Identifying & prioritizing types of intel

    One final thing before we discuss best practices for gathering intel: You need to determine the specific types of intel that are required to help your stakeholders achieve their goals.

    To put it plainly, the types of intel you need to help sales reps win deals are not necessarily the same types of intel you need to help product managers create differentiated roadmaps. Will there be overlap across stakeholders? Almost certainly. But whereas a sales rep may want two sentences about a specific competitor’s pricing model, a product manager may want a more general perspective on the use cases that are and are not being addressed by other players in the market. In terms of gathering intel, these two situations demand two different approaches.

    It’s also important to recognize the trial-and-error component of this process; it’ll take time to get into a groove with each of your stakeholders. Hopefully, their ongoing feedback will enable you to do a better and better job of collecting the data they need. The more communicative everyone is, the more quickly you’ll get to a place where competitive intelligence is regularly making an impact across the organization.

    5 best practices for gathering competitive intelligence

    Now that we’ve covered all our bases, the rest of today’s guide is dedicated to exploring five best practices for gathering competitive intelligence in a successful, repeatable manner.

    1. Monitor changes to your competitors’ websites

    [According to the State of CI Report, 99% of CI professionals consider their competitors’ websites to be valuable sources of intel. 35% say they’re extremely valuable.]

    You can make extraordinary discoveries by simply monitoring changes on your competitors’ websites. Edits to homepage copy can indicate a change in marketing strategy (e.g., doubling down on a certain audience). Edits to careers page copy can indicate a change in product strategy (e.g., looking for experts in a certain type of engineering). Edits to customer logos can indicate opportunities for your sales team (e.g., when a competitor appears to have lost a valuable account).

    The examples are virtually endless. No matter which specific stakeholders and goals you’re focused on, frequenting your competitors’ websites is a time-tested tactic for gathering intel.
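
    If you want to automate part of this monitoring rather than spot-checking pages by hand, a small script can flag changes for you. The sketch below is a minimal, hypothetical example (the URLs are placeholders, and a real setup would respect robots.txt and polling etiquette): it fetches each page, hashes the HTML, and reports a change whenever the hash differs from the previous run.

    ```python
    import hashlib
    import json
    import pathlib

    import requests

    # Hypothetical competitor pages to watch; replace with the real URLs you care about.
    PAGES = [
        "https://competitor.example.com/pricing",
        "https://competitor.example.com/careers",
    ]
    STATE_FILE = pathlib.Path("page_hashes.json")

    def check_pages():
        previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        current = {}
        for url in PAGES:
            html = requests.get(url, timeout=30).text
            digest = hashlib.sha256(html.encode("utf-8")).hexdigest()
            current[url] = digest
            if url in previous and previous[url] != digest:
                print(f"Change detected: {url}")
        STATE_FILE.write_text(json.dumps(current, indent=2))

    if __name__ == "__main__":
        check_pages()
    ```

    In practice you would likely diff the extracted text rather than compare hashes, so you can see what changed rather than just that something changed.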

    2. Conduct win/loss analysis

    [According to the State of CI Report, 96% of CI professionals consider win/loss analysis to be a valuable source of intel. 38% say it’s extremely valuable.]

    Although win/loss analysis—the process of determining why deals are won or lost—is a discipline in its own right, it’s often a gold mine of competitive intelligence. The most effective method of collecting win/loss data is interviewing customers (to find out why they bought your solution) and prospects (to find out why they didn’t buy your solution). You’ll find that these conversations naturally yield competitive insights—a customer mentions that your solution is superior in this respect, a prospect mentions that your solution is inferior in that respect, etc.

    Through the aggregation and analysis of your customers’ and prospects’ feedback, you’ll be able to capitalize on some tremendously valuable intel.

    3. Embrace internal knowledge

    [According to the State of CI Report, 99% of CI professionals consider internal knowledge to be a valuable source of intel. 52% say it’s extremely valuable.]

    This may seem counterintuitive, but it’s true: Your stakeholders themselves are amazing sources of competitive intelligence. In fact, as you read above, more than half of CI pros say internal knowledge (a.k.a. field intelligence) is an extremely valuable resource. 

    Sales reps are often speaking with prospects, and marketers, customer support reps, and product managers are often speaking with customers. Across these conversations with external folks, your colleagues learn about your competitors in all kinds of useful ways—product features, pricing models, roadmap priorities, sales tactics, and so on.

    Some of the best ways to gather internal knowledge include listening to calls with prospects and customers, reviewing emails and chat messages, and combing through CRM notes.

    4. Find out what your competitors’ customers are saying

    [According to the State of CI Report, 94% of CI professionals consider their competitors’ customers’ reviews to be valuable sources of intel. 24% say they’re extremely valuable.]

    If you found yourself wondering how one might fill in the gaps between pieces of internal knowledge, look no further: By reading reviews written by your competitors’ customers, you can uncover tons of previously unknown intel.

    And if your initial instinct is to head straight for the scathing reviews, make no mistake—there’s just as much to learn from your competitors’ happy customers as there is from their unhappy customers. Let’s say, for example, that nearly every single positive review for one of your competitors makes mention of a specific feature. This is a critical piece of intel; as long as you’re lacking in this area, your rival will boast a concrete point of differentiation.

    5. Keep your eye on the news

    [According to the State of CI Report, 96% of CI professionals consider news to be a valuable source of intel. 38% say it’s extremely valuable.]

    Product launches, strategic partnerships, industry awards—there’s no shortage of occasions that may land your competitors in the news. Typically, media coverage is the result of a press release and/or other public relations tactics, but that may not always be the case. (In certain industries, media coverage is very common—whether it’s solicited or not.)

    Regardless of why a competitor is in the news, it’s almost always an opportunity to gather intel. In the case of a product or feature launch, you can learn about the positioning they’re trying to establish. In the case of a partnership, you can learn about the kinds of prospects they’re trying to connect with. And in the case of an award, you can learn about the ways in which they’re trying to present themselves to prospects.

    Author: Conor Bond

    Source: Crayon

  • How to generate relationship intelligence and use it to your advantage

    How to generate relationship intelligence and use it to your advantage

    There’s more to prospect contact data than phone numbers, job titles, and company pain points.

    In your CRM and other communication tools you can find valuable information, known as relationship intelligence, that goes beyond the surface level. 

    Let’s say a current customer forwards one of your product emails to several procurement officers — this could indicate a change in their spending budget. 

    But what if I’m already using sales, lead, or market intelligence? Do I really need to add another type of intelligence to my data strategy?

    Relationship intelligence broadens your outreach potential by connecting the dots that are laid out by other types of intelligence. Using your CRM system (and supplemental data from a provider), you can find new opportunities close to those you’re already working with.

    What is relationship intelligence data?

    Relationship intelligence is a type of data that’s stored in CRM databases, and it’s used to gain new insights on current and potential customers. Data points in relationship intelligence come from company interactions, or in other words, customer-facing communications.

    Companies can use relationship intelligence to append records within their existing database and clean inaccurate data — and possibly create a new organizational chart.

    Relationship intelligence vs. Customer intelligence

    Relationship intelligence creates family tree-like branches between professionals, and at each end are bits of customer intelligence.

    Customer intelligence is customer-centric, while relationship intelligence is connection-centric: 

    Customer intelligence:

    • Phone numbers & email addresses
    • Reporting structure
    • Job titles
    • Social media handles

    Relationship intelligence:

    • Outreach campaign targets
    • Number of support tickets submitted
    • Number of renewal cycles
    • Social media traffic

    Benefits of relationship intelligence data

    By adding on to existing data about sales, leads, and customers, relationship intelligence helps sales reps and marketers to achieve the following:

    • Reduce the amount of prospect research.
    • Find the right prospects for a deal close.
    • Gain deeper knowledge of their prospects.
    • Personalize their campaign messages and pitches.
    • Reach new buyers before competitors.
    • Improve relationships with current customers.

    Looking at the list above, it’s no wonder that 77% of B2B sales and marketing professionals believe personalized experiences make for better customer relationships. 

    A one-size-fits-all approach to using intelligence in sales and marketing can drastically reduce its effectiveness, so understanding relationships is an important advantage.

    Tools to build relationship intelligence

    Relationship intelligence tools help fill in the gaps in contact databases and act on new intelligence gained.

    Consider these solutions to build relationship intelligence:

    Customer relationship management (CRM)

    As a staple of many organizations, a CRM system may have untapped potential. To dig up relationship intelligence, companies can unify customer data storage, integrate with other applications used for customer interactions, and import third-party data.

    Data provider or data collector

    In-house data collection is ideal for saving resources, but is time-consuming if it’s your sole data source. Time spent on data collection and organization takes away from important sales and marketing-oriented tasks.

    If you have room in your budget, data providers can add to your existing database.

    Data visualizer

    Looking over lines and lines of data can put anyone to sleep, and it's difficult to build a full, 360-degree view of your leads and customers if you can't see the whole picture. Data visualization tools turn your data into graphs and charts that let people digest information more easily.

    Email automation

    Your email inbox is where a majority of communications occur. There is so much information to bank on in your emails, such as job titles, phone numbers, events, and company names. 

    When your email system is synced with your CRM, this valuable data can be easily captured and stored for future customer engagement.

    Next steps in building better relationships with intelligence

    Sales and marketing teams can leverage relationship intelligence from professional interactions, such as emails and account management activity. This can improve sales and marketing efforts by going beyond basic contact information.

    Take that valuable information, and put it into your next outreach strategy for more sales opportunities.

    Author: Rayana Barnes

    Source: Zoominfo

  • No Question Research as a solution to common data collection issues

    No Question Research as a solution to common data collection issues

    Research projects are challenging enough without having to handle respondent fraud. 'No Question Research' draws on different data sources and offers a solution to the false responses found in data collection.

    About two years ago, Tony Costella from Heineken posted a compelling piece on the GreenBook Blog, ‘Everybody Lies'. It was intriguing and, at the same time, a clear call to action for all of us, researchers on the agency or client side, to address the fact that people lie, intentionally or not. We trust people's claims about their feelings or behavior, while they are often clueless about both. Tony's message: we should build a toolbox of new approaches and methods, and add behavioral measures to our way of working.

    ‘No Question Research' (passive and automated data collection: behavioral, transactional, or social media data, etc.) provides an answer to many of these challenges and has been booming over the past decade. This becomes clear when we look at industry revenue. Diving into the Global Market Research Report 2019, Ray Poynter has pointed out that ‘No Question Research' already represents half (US$39 billion) of 2018 revenue, a steady increase over time (+44% growth vs. 2014) and the sole contributor to the growth of our industry.

    ‘No Question Research' is on the rise and is here to stay. It is based on pure data streams or observations that are, in one way or another, directly derived from real consumer behavior, giving us a vast amount of (passively collected) data. Whether it's word of mouth, click behavior, in-store behavior measured through sensors, moments shared from consumers' lives, or frustration vented in Instagram stories, it reflects what people actually do, and that is what makes it so valuable. No questions are asked.

    Given the speed at which technology is evolving, more data streams will become available and will be used to tap into people's behavior in order to learn, understand, and predict better. With this, it is likely that no-question research and analytics (AI and predictive modeling) will take a more prominent place in the blend of sources used in the data and insights industry.

    Digitizing how people behave & buy

    As an example, look at the online retail industry. There is a consensus that a unique and engaging customer experience is how brick-and-mortar stores can add value to the consumer journey. Although data analytics has become the norm for uncovering the online customer journey, the practice is still in its infancy in the offline world. In the digital space, e-commerce companies use big data and artificial intelligence to predict and influence online customer behavior without the customer even being aware. Something as seemingly trivial as changing the color or location of the 'buy' button on a website has a direct impact on sales, enabling companies to set clear goals and KPIs for every single element of the digital shop journey. Online, consumer behavior is monitored closely and tracked continuously, resulting in a data lake that enables the creation of algorithms to predict consumer behavior, set clear KPIs, and conduct A/B testing. No questions asked.
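
    To make the "no questions asked" point concrete, here is a minimal sketch of the kind of A/B test described above: two variants of a buy button, conversions counted passively, and a two-proportion z-test to judge whether the observed difference is likely to be real. The visitor and purchase counts are invented for illustration.

    ```python
    from math import erfc, sqrt

    # Hypothetical, made-up results of an A/B test on a 'buy' button.
    visitors_a, purchases_a = 10_000, 312   # variant A (control)
    visitors_b, purchases_b = 10_000, 351   # variant B (new color/position)

    p_a = purchases_a / visitors_a
    p_b = purchases_b / visitors_b

    # Pooled two-proportion z-test.
    p_pool = (purchases_a + purchases_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

    print(f"Conversion A: {p_a:.2%}, B: {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
    ```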

    Compared to the digital space, brick-and-mortar retail stores live in the analytical dark ages and rarely consider customer behavior data and metrics, other than sales. So why does physical retail not follow suit? The truth is very few retailers know exactly how the shopper experience in their stores really works and what makes their customers tick.

    Why? Because, at best, companies conduct shopper studies that question people about their shopping experience, observe them, and try to understand how to create impact. But it's simply impossible for people to tell you exactly how and why they behaved in a certain way. Data collected through sensors gives an objective answer to that: indisputable, clear, objective behavioral data streams.

    At IIeX Europe, I will showcase how sensor data (GDPR compliant) can give insights into possible store layout optimizations and merchandise productivity, based on real consumer behavior. On top of this, the technology is used to measure and understand in-store marketing performance, as well as labor management, proactive loss prevention, and sales and traffic predictions through artificial intelligence.

    No Question Research, the holy grail?

    Of course, ‘No Question Research' is just one of many elements in the mix. But there will certainly be a clear elimination of waste (e.g., irrelevant studies and questions people cannot answer) and a focus on the right blend of methods, tools, and data streams. In fact, the deep understanding of the ‘why' will still be covered by qualitative (question-based) market research, which will grow alongside the rise of ‘No Question Research' to make the vast amount of data insightful and relevant.

    Author: Wim Hamaekers

    Source: Greenbook Blog

  • The 4 steps of the big data life cycle

    The 4 steps of the big data life cycle

    Simply put, from the perspective of the big data life cycle, there are just four aspects:

    1. Big data collection
    2. Big data preprocessing
    3. Big data storage
    4. Big data analysis

    Together, these four aspects constitute the core technologies of the big data life cycle.

    Big data collection

    Big data collection is the gathering of massive structured and unstructured data from a variety of sources.

    Database collection: Sqoop and ETL tools are popular choices, and traditional relational databases such as MySQL and Oracle still serve as the data stores for many enterprises. Open-source tools such as Kettle and Talend also build in big data integration capabilities, enabling data synchronization and integration between HDFS, HBase, and mainstream NoSQL databases.
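
    Sqoop, Kettle, and Talend are the tools named above. As a hedged, language-level illustration of the same extract step (with a hypothetical MySQL connection string and table), the sketch below pulls a table out of a relational database in chunks and lands it as files that a big data store can ingest.

    ```python
    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical connection details; a real Sqoop/Kettle job would point at the actual
    # source database and target (HDFS, HBase, a NoSQL store, etc.).
    engine = create_engine("mysql+pymysql://user:password@db-host:3306/sales")

    # Extract the source table in chunks so large tables don't exhaust memory.
    for i, chunk in enumerate(pd.read_sql("SELECT * FROM orders", engine, chunksize=50_000)):
        chunk.to_parquet(f"landing/orders_part_{i:05d}.parquet", index=False)
    ```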

    Network data collection: A collection method that uses web crawlers or public website APIs to obtain unstructured or semi-structured data from web pages and unify it as local data.
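
    A minimal sketch of this route, assuming a hypothetical listing page and CSS selectors and using the requests and BeautifulSoup libraries, might look like this: fetch the page, extract the semi-structured fields, and unify them into local, tabular records.

    ```python
    import csv

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical listing page; in practice, respect robots.txt and the site's terms of use.
    URL = "https://example.com/products"

    html = requests.get(URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    rows = []
    for item in soup.select("div.product"):          # hypothetical CSS selectors
        rows.append({
            "name": item.select_one("h2").get_text(strip=True),
            "price": item.select_one("span.price").get_text(strip=True),
        })

    # Unify the scraped records into a local, structured file.
    with open("products.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "price"])
        writer.writeheader()
        writer.writerows(rows)
    ```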

    File collection: Includes real-time file collection and processing with Flume, ELK-based log collection, incremental collection, and so on.
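
    Flume and the ELK stack are the tools referenced above. As a toy illustration of the incremental-collection idea they implement (read only what has been appended since the last pass, tracked with a stored byte offset), the sketch below uses a placeholder log path.

    ```python
    import json
    import os
    import pathlib

    LOG_FILE = "/var/log/app/access.log"      # placeholder path
    OFFSET_FILE = pathlib.Path("offset.json")

    def collect_new_lines():
        """Return log lines appended since the previous run."""
        offset = json.loads(OFFSET_FILE.read_text())["offset"] if OFFSET_FILE.exists() else 0
        if os.path.getsize(LOG_FILE) < offset:   # file was rotated or truncated; start over
            offset = 0
        with open(LOG_FILE, "rb") as f:
            f.seek(offset)
            data = f.read()
            new_offset = f.tell()
        OFFSET_FILE.write_text(json.dumps({"offset": new_offset}))
        return data.decode("utf-8", errors="replace").splitlines()

    if __name__ == "__main__":
        for line in collect_new_lines():
            print(line)   # in a real pipeline, ship these to Kafka/Elasticsearch/etc.
    ```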

    Big data preprocessing

    Big data preprocessing refers to a series of operations, such as cleaning, filling, smoothing, merging, normalization, and consistency checking, performed on the collected raw data before analysis, in order to improve data quality and lay the foundation for later analysis work. Data preprocessing mainly includes four parts:

    1. Data cleaning
    2. Data integration
    3. Data transformation
    4. Data reduction

    Data cleaning refers to the use of ETL and other cleaning tools to deal with missing data (records missing attributes of interest), noisy data (errors in the data, or values that deviate from what is expected), and inconsistent data.

    Data integration refers to merging data from different data sources and storing it in a unified database. It focuses on solving three problems: schema matching, data redundancy, and the detection and resolution of data value conflicts.

    Data transformation refers to handling inconsistencies in the extracted data. It also includes cleaning work, that is, cleaning abnormal data according to business rules to ensure the accuracy of subsequent analysis results.

    Data reduction refers to minimizing the volume of data so as to obtain a smaller data set while preserving the character of the original data as much as possible. Techniques include data cube aggregation, dimensionality reduction, data compression, numerosity reduction, concept hierarchies, and so on.
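
    A compact illustration of these four preprocessing steps on a hypothetical orders table (the file paths and column names are invented) might look like this in pandas.

    ```python
    import pandas as pd

    orders = pd.read_parquet("landing/orders.parquet")    # hypothetical raw extract
    customers = pd.read_csv("crm_customers.csv")          # second, hypothetical source

    # 1. Data cleaning: handle missing values, drop noisy/out-of-range records.
    orders["quantity"] = orders["quantity"].fillna(1)
    orders = orders[orders["unit_price"].between(0, 10_000)]

    # 2. Data integration: merge sources and remove redundant records.
    merged = orders.merge(customers, on="customer_id", how="left")
    merged = merged.drop_duplicates(subset=["order_id"])

    # 3. Data transformation: enforce consistent types and derive analysis fields.
    merged["order_date"] = pd.to_datetime(merged["order_date"])
    merged["revenue"] = merged["quantity"] * merged["unit_price"]

    # 4. Data reduction: aggregate to a smaller, analysis-ready data set.
    monthly = (merged
               .groupby([pd.Grouper(key="order_date", freq="MS"), "region"])["revenue"]
               .sum()
               .reset_index())
    monthly.to_parquet("warehouse/monthly_revenue.parquet", index=False)
    ```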

    Big data storage

    Big data storage refers to the process of persisting the collected data in the form of a database, following three typical routes:

    New database clusters based on MPP architecture: These use a shared-nothing architecture combined with the efficient distributed computing model of MPP, along with columnar storage, coarse-grained indexing, and other big data processing techniques, and focus on data storage for industry big data. With low cost, high performance, and high scalability, they are widely used in enterprise analytics applications.

    Compared with traditional databases, MPP products offer significant advantages in PB-scale data analysis. Naturally, MPP databases have become the best choice for the new generation of enterprise data warehouses.

    Technology expansion and packaging based on Hadoop: This route targets data and scenarios that are difficult to handle with traditional relational databases (such as the storage and computation of unstructured data). It leverages Hadoop's open-source advantages and related strengths (handling unstructured and semi-structured data, complex ETL processes, and complex data mining and computation models) to derive the relevant big data technology.

    With the advancement of technology, its application scenarios will gradually expand. The most typical current application scenario is supporting Internet-scale big data storage and analysis by extending and encapsulating Hadoop, involving dozens of NoSQL technologies.
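
    As a small, hedged illustration of this route (assuming a Spark-on-Hadoop environment and a hypothetical HDFS directory of semi-structured JSON logs), PySpark can read the raw data, shape it, and write it back in a columnar format for analysis.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("log-ingest").getOrCreate()

    # Hypothetical HDFS path holding semi-structured JSON event logs.
    events = spark.read.json("hdfs:///data/raw/events/*.json")

    # Basic shaping: keep the fields of interest and derive an event date.
    shaped = (events
              .select("user_id", "event_type", "ts")
              .withColumn("event_date", F.to_date(F.col("ts"))))

    # Persist as partitioned Parquet for downstream analysis.
    shaped.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///data/curated/events")
    ```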

    Big data all-in-one: This is a combination of software and hardware designed for the analysis and processing of big data. It consists of a set of integrated servers, storage devices, operating systems, database management systems, and pre-installed and optimized software for data query, processing, and analysis. It has good stability and vertical scalability.

    Big data analysis and mining

    Big data analysis and mining is the process of extracting, refining, and analyzing chaotic data through visual analysis, data mining algorithms, predictive analysis, semantic engines, data quality management, and so on.

    Visual analysis: Visual analysis refers to an analysis method that clearly and effectively conveys and communicates information with the aid of graphics. It is mainly used for association analysis of massive data: with the help of a visual data analysis platform, dispersed heterogeneous data is analyzed for associations and presented as a complete analysis chart. The result is simple, clear, intuitive, and easy to accept.
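
    A minimal example of turning pairwise associations into a chart, using an invented feature table with pandas and matplotlib, is a correlation heatmap.

    ```python
    import matplotlib.pyplot as plt
    import pandas as pd

    df = pd.read_parquet("warehouse/monthly_revenue_features.parquet")  # hypothetical feature table
    corr = df.select_dtypes("number").corr()

    fig, ax = plt.subplots(figsize=(6, 5))
    im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
    ax.set_xticks(range(len(corr.columns)))
    ax.set_xticklabels(corr.columns, rotation=45, ha="right")
    ax.set_yticks(range(len(corr.columns)))
    ax.set_yticklabels(corr.columns)
    fig.colorbar(im, ax=ax, label="correlation")
    ax.set_title("Pairwise correlations")
    fig.tight_layout()
    fig.savefig("correlations.png", dpi=150)
    ```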

    Data mining algorithm: Data mining algorithms are data analysis methods that test and compute on data by creating data mining models. They are the theoretical core of big data analysis.

    There are many different data mining algorithms, and they reveal different data characteristics depending on data types and formats. Generally speaking, though, the process of creating a model is similar: first analyze the data provided by the user, then search for specific types of patterns and trends, use the results to define the best parameters for the mining model, and finally apply those parameters to the entire data set to extract actionable patterns and detailed statistics.
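
    The sketch below follows that general process with one concrete, widely used algorithm (k-means clustering from scikit-learn) on a hypothetical customer feature table: fit the model, then apply it to the entire data set and summarize the discovered segments.

    ```python
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical customer feature table with invented column names.
    features = pd.read_parquet("warehouse/customer_features.parquet")
    X = StandardScaler().fit_transform(features[["orders_per_month", "avg_basket", "tenure_days"]])

    # Create the mining model: search for 4 segments in the data.
    model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # Apply the model to the entire data set to extract the discovered pattern.
    features["segment"] = model.labels_
    print(features.groupby("segment").mean(numeric_only=True))   # summary statistics per segment
    ```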

    Data quality management: Data quality management refers to the series of management activities, including the identification, measurement, monitoring, and early warning of data quality problems that may arise at each stage of the data life cycle (planning, acquisition, storage, sharing, maintenance, application, retirement, etc.), undertaken to improve data quality.

    Predictive analysis: Predictive analysis is one of the most important application areas of big data analysis. It combines a variety of advanced analytic capabilities (statistical analysis, predictive modeling, data mining, text analysis, entity analytics, optimization, real-time scoring, machine learning, etc.) to predict uncertain events.

    It helps users analyze trends, patterns, and relationships in structured and unstructured data, and use these insights to predict future events and provide a basis for taking action.
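
    As an illustrative sketch of predictive modeling and scoring (with hypothetical churn data and scikit-learn, not any specific product mentioned above), you could fit a classifier on historical outcomes and use its probability estimates to prioritize action.

    ```python
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical historical data: one row per customer, 'churned' is the known outcome.
    data = pd.read_parquet("warehouse/churn_history.parquet")
    X = data[["orders_per_month", "support_tickets", "tenure_days", "avg_basket"]]
    y = data["churned"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # Score current customers so the team can act on the highest-risk accounts first.
    current = pd.read_parquet("warehouse/current_customers.parquet")
    current["churn_risk"] = model.predict_proba(current[X.columns])[:, 1]
    print(current.sort_values("churn_risk", ascending=False).head(10))
    ```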

    Semantic Engine: Semantic engine refers to the operation of adding semantics to existing data to improve users’ Internet search experience.

    Author: Sajjad Hussain

    Source: Medium

     
