6 items tagged "data analysis"

  • 5 best practices on collecting competitive intelligence data

    5 best practices on collecting competitive intelligence data

    Competitive intelligence data collection is a challenge. In fact, according to our survey of more than 1,000 CI professionals, it’s the toughest part of the job. On average, it takes up one-third of all time spent on the CI process (the other two parts of the process being analysis and activation).

    A consistent stream of sound competitive data—i.e., data that’s up-to-date, reliable, and actionable—is foundational to your long-term success in a crowded market. In the absence of sound data, your CI program will not only prove ineffective—it may even prove detrimental.

    By the time you’re done reading, you’ll have an answer to each of the following:

    • Why is gathering competitive intelligence difficult?
    • What needs to be done before gathering competitive intelligence?
    • How can you gather competitive intelligence successfully?

    Let’s begin!

    Why is gathering competitive intelligence difficult?

    It’s worth taking a minute to consider why gathering intel is the biggest roadblock encountered by CI pros today. At the risk of oversimplifying, we’ll quickly discuss two explanations (which are closely related to one another): bandwidth and volume.

    Bandwidth

    CI headcount is growing with each passing year, but roughly 30% of teams consist of two or fewer dedicated professionals. 7% of teams consist of half a person—meaning a single employee spends some of their time on CI—and another 6% of businesses have no CI headcount at all.

    When the responsibility of gathering intel falls on the shoulders of just one or two people—who may very well have full-time jobs on top of CI—data collection is going to prove difficult. For now, bandwidth limitations help to explain why the initial part of the CI process poses such a significant challenge.

    Volume

    With the modern internet age has come an explosion in competitive data. Businesses’ digital footprints are far bigger than they were just a few years ago; there’s never been more opportunity for competitive research and analysis.

    Although this is an unambiguously good thing—case in point: it’s opened the door for democratized, software-driven competitive intelligence—there’s no denying that the sheer volume of intel makes it difficult to gather everything you need. And, obviously, the challenges of ballooning data are going to be compounded by the challenges of limited bandwidth.

    Key steps before gathering competitive intelligence

    Admittedly, referring to the collection of intel as the initial part of the CI process is slightly misleading. Before you dedicate hours of your time to visiting competitors’ websites, scrutinizing online reviews, reviewing sales calls, and the like, it’s imperative that you establish priorities.

    What do you and your stakeholders hope to achieve as a result of your efforts? Who are your competitors, and which ones are more or less important? What kinds of data do you want to collect, and which ones are more or less important?

    Nailing down answers to these questions—and others like them—is a critical prerequisite to gathering competitive intelligence.

    Setting goals with your CI stakeholders

    The competitors you track and the types of intel you gather will be determined, in part, by the specific CI goals towards which you and your stakeholders are working.

    Although it’s true that, at the end of the day, practically everyone is working towards a healthier bottom line and greater market share, different stakeholders have different ways of contributing to those common objectives. It follows, then, that different stakeholders have different needs from a competitive intelligence perspective.

    Generally speaking:

    • Sales reps want to win competitive deals.
    • Marketers want to create differentiated positioning.
    • Product managers want to create differentiated roadmaps.
    • Customer support reps want to improve retention against competitors.
    • Executive leaders want to mitigate risk and build long-term competitive advantage.

    Depending on the size of your organization and the maturity of your CI program, it may not be possible to serve each stakeholder to the same extent simultaneously. Before you gather any intel, you’ll need to determine which stakeholders and goals you’ll be focusing on.

    Segmenting & prioritizing your competitors

    With a clear sense of your immediate goals, it’s time to segment your competitive landscape and figure out which competitors are most important for the time being.

    Segmenting your competitive landscape is the two-part job of (1) identifying your competitors and (2) assigning each one to a category. The method you use to segment your competitive landscape is entirely up to you. There are a number of popular options to choose from, and they can even be layered on top of one another. They include:

    • Direct vs. indirect vs. perceived vs. aspirational competitors
    • Sales competitiveness tiers
    • Company growth stage tiers

    We’ll stick with the first option for now. Whereas a direct competitor is one with which you go head-to-head for sales, an indirect competitor is one that sells a similar product to a different market or a tangential product to the same market. And whereas a perceived competitor is one that—unbeknownst to prospects—offers something completely different from you, an aspirational competitor is one that you admire for the work they’re doing in a related field.

    Once you’ve categorized your competitors, consider your immediate goals and ask yourself, “Given what we’re trying to do here, which competitors require the most attention?” The number of competitors you prioritize largely depends on the breadth of your competitive landscape.

    Identifying & prioritizing types of intel

    One final thing before we discuss best practices for gathering intel: You need to determine the specific types of intel that are required to help your stakeholders achieve their goals.

    To put it plainly, the types of intel you need to help sales reps win deals are not necessarily the same types of intel you need to help product managers create differentiated roadmaps. Will there be overlap across stakeholders? Almost certainly. But whereas a sales rep may want two sentences about a specific competitor’s pricing model, a product manager may want a more general perspective on the use cases that are and are not being addressed by other players in the market. In terms of gathering intel, these two situations demand two different approaches.

    It’s also important to recognize the trial-and-error component of this process; it’ll take time to get into a groove with each of your stakeholders. Hopefully, their ongoing feedback will enable you to do a better and better job of collecting the data they need. The more communicative everyone is, the more quickly you’ll get to a place where competitive intelligence is regularly making an impact across the organization.

    5 best practices for gathering competitive intelligence

    Now that we’ve covered all our bases, the rest of today’s guide is dedicated to exploring five best practices for gathering competitive intelligence in a successful, repeatable manner.

    1. Monitor changes to your competitors’ websites

    [According to the State of CI Report, 99% of CI professionals consider their competitors’ websites to be valuable sources of intel. 35% say they’re extremely valuable.]

    You can make extraordinary discoveries by simply monitoring changes on your competitors’ websites. Edits to homepage copy can indicate a change in marketing strategy (e.g., doubling down on a certain audience). Edits to careers page copy can indicate a change in product strategy (e.g., looking for experts in a certain type of engineering). Edits to customer logos can indicate opportunities for your sales team (e.g., when a competitor appears to have lost a valuable account).

    The examples are virtually endless. No matter which specific stakeholders and goals you’re focused on, frequenting your competitors’ websites is a time-tested tactic for gathering intel.
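    To make this concrete, here is a minimal sketch (in Python, using the third-party requests library) of what automated change monitoring can look like: it fetches a list of pages, hashes their contents, and flags anything that has changed since the last run. The URLs and state file are placeholders, and a real monitor would also normalize the HTML, respect robots.txt, and throttle its requests.

        import hashlib
        import json
        from pathlib import Path

        import requests

        # Placeholder URLs -- substitute the competitor pages you actually track.
        PAGES = [
            "https://example-competitor.com/",
            "https://example-competitor.com/pricing",
            "https://example-competitor.com/careers",
        ]
        STATE_FILE = Path("page_hashes.json")  # where the previous run's hashes live


        def fetch_hash(url: str) -> str:
            """Download a page and return a hash of its raw HTML."""
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return hashlib.sha256(response.content).hexdigest()


        def main() -> None:
            previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
            current = {}

            for url in PAGES:
                current[url] = fetch_hash(url)
                if url in previous and previous[url] != current[url]:
                    print(f"Changed since last check: {url}")

            # Save this run's hashes so the next run has something to compare against.
            STATE_FILE.write_text(json.dumps(current, indent=2))


        if __name__ == "__main__":
            main()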

    2. Conduct win/loss analysis

    [According to the State of CI Report, 96% of CI professionals consider win/loss analysis to be a valuable source of intel. 38% say it’s extremely valuable.]

    Although win/loss analysis—the process of determining why deals are won or lost—is a discipline in its own right, it’s often a gold mine of competitive intelligence. The most effective method of collecting win/loss data is interviewing customers (to find out why they bought your solution) and prospects (to find out why they didn’t buy your solution). You’ll find that these conversations naturally yield competitive insights—a customer mentions that your solution is superior in this respect, a prospect mentions that your solution is inferior in that respect, etc.

    Through the aggregation and analysis of your customers’ and prospects’ feedback, you’ll be able to capitalize on some tremendously valuable intel.

    3. Embrace internal knowledge

    [According to the State of CI Report, 99% of CI professionals consider internal knowledge to be a valuable source of intel. 52% say it’s extremely valuable.]

    This may seem counterintuitive, but it’s true: Your stakeholders themselves are amazing sources of competitive intelligence. In fact, as you read above, more than half of CI pros say internal knowledge (a.k.a. field intelligence) is an extremely valuable resource. 

    Sales reps are often speaking with prospects, and marketers, customer support reps, and product managers are often speaking with customers. Across these conversations with external folks, your colleagues learn about your competitors in all kinds of useful ways—product features, pricing models, roadmap priorities, sales tactics, and so on.

    Some of the best ways to gather internal knowledge include listening to calls with prospects and customers, reviewing emails and chat messages, and combing through CRM notes.

    4. Find out what your competitors’ customers are saying

    [According to the State of CI Report, 94% of CI professionals consider their competitors’ customers’ reviews to be valuable sources of intel. 24% say they’re extremely valuable.]

    If you found yourself wondering how one might fill in the gaps between pieces of internal knowledge, look no further: By reading reviews written by your competitors’ customers, you can uncover tons of previously unknown intel.

    And if your initial instinct is to head straight for the scathing reviews, make no mistake—there’s just as much to learn from your competitors’ happy customers as there is from their unhappy customers. Let’s say, for example, that nearly every single positive review for one of your competitors makes mention of a specific feature. This is a critical piece of intel; as long as you’re lacking in this area, your rival will boast a concrete point of differentiation.

    5. Keep your eye on the news

    [According to the State of CI Report, 96% of CI professionals consider news to be a valuable source of intel. 38% say it’s extremely valuable.]

    Product launches, strategic partnerships, industry awards—there’s no shortage of occasions that may land your competitors in the news. Typically, media coverage is the result of a press release and/or other public relations tactics, but that may not always be the case. (In certain industries, media coverage is very common—whether it’s solicited or not.)

    Regardless of why a competitor is in the news, it’s almost always an opportunity to gather intel. In the case of a product or feature launch, you can learn about the positioning they’re trying to establish. In the case of a partnership, you can learn about the kinds of prospects they’re trying to connect with. And in the case of an award, you can learn about the ways in which they’re trying to present themselves to prospects.

    Author: Conor Bond

    Source: Crayon

  • Overcoming data challenges in the financial services sector  

    Overcoming data challenges in the financial services sector

    Importance of the financial services sector

    The financial services industry plays a significant role in global economic growth and development. The sector contributes to a more efficient flow and management of savings and investments and enhances the risk management of financial transactions for products and services. Institutions such as commercial and investment banks, insurance companies, non-banking financial companies, credit and loan companies, brokerage firms, and trust companies offer a wide range of financial services and distribute them in the marketplace. Some of the most common financial services are credits, loans, insurance, and leases, distributed directly by insurance companies and banks, or indirectly via agents and brokers.

    Limitations and challenges in data availability

    Given the important role of financial services in the global economy, one would expect the financial services market to be professional and highly developed, including in terms of data availability. Specifically, one would expect well-designed databases to be available in which a wide range of information on the relevant industries is presented and can be collected. However, reality does not meet these expectations.

    Assessments of various financial service markets show that data collection is a challenging process, and several causes contribute to this. Poor or non-existent data availability, data opacity, overly consolidated information in market or annual reports, and differing categorization schemes for financial services are some of the most significant barriers. Differences in the legal framework among countries have a major impact on how data are recorded and categorized. A representative example is the different classification of financial services across countries: EU countries are obligated to publish data on financial service lines under a common classification scheme with pre-defined classes, which in many cases differs from the schemes or classes used by non-EU countries, contributing to an unclear, inaccurate overview of the market. Each classification scheme must be identified and understood to avoid double counting and overlapping data. In addition, public institutions often publish data that reveal only part of the market rather than actual market sizes. Lastly, some financial services are defined differently across countries, which adds to the complexity of collecting data and assessing the financial services market.

    Need for a predictive model

    To overcome the challenges of data inconsistency and poor, limited, or non-existent data availability, and to produce an accurate estimate of the financial services market, it is necessary to develop a predictive model that analyzes a wide range of indicators. A characteristic example is the estimation of the global financial services market conducted by The World Bank, which built an analysis model based on both derived and measured data to address the challenge of limited data inputs.

    An analysis model created by Hammer for assessing financial services markets takes into consideration both qualitative and quantitative data collected from several sources and predictive indicators. In previous assessments of specific financial services markets, data were collected from publications, articles, and reports issued by public financial services research institutions, national financial services associations and association groups, and private financial services companies. The opinions of field experts also constituted a significant source of information. The model included regression and principal component analysis, in which derived data were produced from certain macroeconomic factors (such as country population, GDP, GDP per sector, and unemployment rate), trade indicators, and economic and political factors.
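    Hammer's actual model is not published here, but the general pattern described above (compress a set of correlated indicators with principal component analysis, then regress observed market sizes on the components to derive estimates where data are missing) can be sketched roughly as follows. The countries, indicator values, and column names are illustrative placeholders, not real data.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        # Illustrative indicator table: one row per country; the market size is
        # known (measured) for some countries and missing (NaN) for others.
        data = pd.DataFrame({
            "population_m":   [83, 67, 60, 47, 17, 10, 5],
            "gdp_bn":         [4200, 3100, 2100, 1400, 1000, 600, 400],
            "unemployment":   [3.0, 7.3, 7.8, 12.9, 3.5, 6.6, 5.1],
            "trade_openness": [88, 65, 61, 72, 155, 160, 110],
            "market_size_bn": [52.0, 41.0, 27.0, np.nan, 15.0, np.nan, 6.0],
        })

        # Standardize the indicators and compress them into two principal components.
        indicators = data.drop(columns="market_size_bn")
        components = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(indicators))

        # Fit a regression on the countries where the market size is measured ...
        measured = data["market_size_bn"].notna().to_numpy()
        model = LinearRegression().fit(components[measured], data.loc[measured, "market_size_bn"])

        # ... and derive estimates for the countries where it is not.
        data.loc[~measured, "market_size_bn"] = model.predict(components[~measured])
        print(data)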

    The selection of indicators and of the analysis model depends on the type of financial service product and the market to be assessed. In addition, model analysis makes it possible to identify and validate correlations within a set of predictive indicators considered potential key drivers of specific markets. In conclusion, the sizes of financial services markets can be determined with the support of an advanced predictive analysis model, which also enables and enhances the comparability and consistency of data across markets and countries.

    Author: Vasiliki Kamilaraki

    Source: Hammer, Market Intelligence

  • Solutions to help you deal with heterogeneous data sources

    Solutions to help you deal with heterogeneous data sources

    With enterprise data pouring in from different sources (CRM systems, web applications, databases, files, etc.), streamlining data processes is a significant challenge, as it requires integrating heterogeneous data streams. In such a scenario, standardizing data becomes a prerequisite for effective and accurate data analysis. Without the right integration strategy, application-specific and intradepartmental data silos will emerge, hindering productivity and delaying results.

    Consolidating data from disparate structured, unstructured, and semi-structured sources can be complex. A survey conducted by Gartner revealed that one-third of respondents consider 'integrating multiple data sources' as one of the top four integration challenges.

    Understanding the common issues faced during this process can help enterprises successfully counteract them. Here are three challenges generally faced by organizations when integrating heterogeneous data sources, as well as ways to resolve them:

    Data extraction

    Challenge: Pulling source data is the first step in the integration process. But it can be complicated and time-consuming if data sources have different formats, structures, and types. Moreover, once the data is extracted, it needs to be transformed to make it compatible with the destination system before integration.

    Solution: The best way to go about this is to create a list of sources that your organization deals with regularly. Look for an integration tool that supports extraction from all these sources. Preferably, go with a tool that supports structured, unstructured, and semi-structured sources to simplify and streamline the extraction process.
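    As a small illustration of what a unified extraction layer can look like, the sketch below uses Python and pandas to pull three hypothetical sources (a CSV export, a SQLite table, and a JSON payload from a web application) into data frames that share one target schema. The column mappings are invented for the example; a commercial integration tool would provide connectors and handle far more edge cases.

        import json
        import sqlite3

        import pandas as pd

        # Target schema every source is mapped onto.
        COLUMNS = ["customer_id", "name", "revenue"]


        def from_csv(path: str) -> pd.DataFrame:
            """Structured source: a flat file export (hypothetical column names)."""
            df = pd.read_csv(path)
            return df.rename(columns={"cust_id": "customer_id", "amount": "revenue"})[COLUMNS]


        def from_database(db_path: str) -> pd.DataFrame:
            """Structured source: a relational table (hypothetical table name)."""
            with sqlite3.connect(db_path) as conn:
                df = pd.read_sql_query(
                    "SELECT id AS customer_id, name, revenue FROM customers", conn
                )
            return df[COLUMNS]


        def from_api(raw_json: str) -> pd.DataFrame:
            """Semi-structured source: a JSON payload from a web application."""
            records = json.loads(raw_json)
            df = pd.json_normalize(records)
            return df.rename(columns={"id": "customer_id", "billing.total": "revenue"})[COLUMNS]


        # Consolidate everything into one frame ready for transformation and loading:
        # combined = pd.concat([from_csv("export.csv"), from_database("crm.db"), from_api(payload)])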

    Data integrity

    Challenge: Data quality is a primary concern in every data integration strategy. Poor data quality can be a compounding problem that affects the entire integration cycle. Processing invalid or incorrect data can lead to faulty analytics which, if passed downstream, can corrupt results.

    Solution: To ensure that correct and accurate data goes into the data pipeline, create a data quality management plan before starting the project. Outlining these steps guarantees that bad data is kept out of every step of the data pipeline, from development to processing.
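    One way to put such a plan into practice is a validation gate that quarantines records violating basic rules before they enter the pipeline. The sketch below shows the pattern in pandas with made-up rules and column names; dedicated data quality tools implement the same idea with much more depth.

        import pandas as pd

        df = pd.DataFrame({
            "order_id": [1, 2, 2, 4],
            "email":    ["a@example.com", "not-an-email", "c@example.com", None],
            "amount":   [120.0, -5.0, 80.0, 40.0],
        })

        # Rule 1: the primary key must be unique.
        duplicates = df[df.duplicated("order_id", keep=False)]

        # Rule 2: required fields must be present and well-formed.
        bad_email = df[~df["email"].str.contains("@", na=True) | df["email"].isna()]

        # Rule 3: values must fall in a plausible range.
        bad_amount = df[df["amount"] < 0]

        # Quarantine anything that breaks a rule; only clean rows flow downstream.
        bad_index = duplicates.index.union(bad_email.index).union(bad_amount.index)
        clean, quarantined = df.drop(bad_index), df.loc[bad_index]
        print(f"{len(clean)} clean rows, {len(quarantined)} quarantined")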

    Scalability

    Challenge: Data heterogeneity leads to the inflow of data from diverse sources into a unified system, which can ultimately lead to exponential growth in data volume. To tackle this challenge, organizations need to employ a robust integration solution that has the features to handle high volume and disparity in data without compromising on performance.

    Solution: Anticipating the extent of growth in enterprise data can help organizations select the right integration solution that meets their scalability and diversity requirements. Integrating one data point at a time is beneficial in this scenario. Evaluating the value of each data point with respect to the overall integration strategy can help prioritize and plan. Say that an enterprise wants to consolidate data from three different sources: Salesforce, SQL Server, and Excel files. The data within each system can be categorized into unique datasets, such as sales, customer information, and financial data. Prioritizing and integrating these datasets one at a time can help organizations gradually scale data processes.
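    A prioritized, one-dataset-at-a-time rollout can be expressed as an ordered plan that the integration job works through, as in the rough sketch below. The extractor functions and source names are placeholders for whatever connectors your integration tool actually provides.

        import pandas as pd

        # Hypothetical extractors -- in practice these would call your integration
        # tool's Salesforce, SQL Server, and Excel connectors.
        def extract_sales() -> pd.DataFrame: ...
        def extract_customer_info() -> pd.DataFrame: ...
        def extract_financials() -> pd.DataFrame: ...

        # Datasets ordered by business value: integrate from the top down, one at a time.
        integration_plan = [
            ("sales (Salesforce)", extract_sales),
            ("customer information (SQL Server)", extract_customer_info),
            ("financial data (Excel)", extract_financials),
        ]

        completed = set()

        def integrate_next():
            """Integrate the highest-priority dataset that has not been loaded yet."""
            for name, extract in integration_plan:
                if name not in completed:
                    print(f"Integrating: {name}")
                    # frame = extract()          # pull the dataset
                    # load_to_warehouse(frame)   # hypothetical load step
                    completed.add(name)
                    return name
            return None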

    Author: Ibrahim Surani

    Source: Dataversity

  • The 4 steps of the big data life cycle

    The 4 steps of the big data life cycle

    Simply put, from the perspective of the big data life cycle, there are no more than four aspects:

    1. Big data collection
    2. Big data preprocessing
    3. Big data storage
    4. Big data analysis

    Together, these four aspects constitute the core technologies of the big data life cycle.

    Big data collection

    Big data collection is the gathering of massive amounts of structured and unstructured data from various sources.

    Database collection: Sqoop and ETL tools are popular, and traditional relational databases such as MySQL and Oracle still serve as data stores for many enterprises. Open-source tools such as Kettle and Talend also build in big data integration capabilities, enabling data synchronization and integration between HDFS, HBase, and mainstream NoSQL databases.

    Network data collection: A method that uses web crawlers or public website APIs to obtain unstructured or semi-structured data from web pages and convert it into structured, locally stored data.
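    As a minimal illustration of this method, the sketch below (Python, using the third-party requests and BeautifulSoup libraries) fetches a placeholder page, extracts its headings and links, and stores them as rows in a local CSV file. A real crawler would also need politeness rules (robots.txt, rate limiting) and error handling.

        import pandas as pd
        import requests
        from bs4 import BeautifulSoup

        URL = "https://example.com/news"  # placeholder page to collect

        response = requests.get(URL, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")

        # Turn the semi-structured page into structured, locally stored rows.
        rows = [
            {"tag": el.name, "text": el.get_text(strip=True), "href": el.get("href")}
            for el in soup.find_all(["h1", "h2", "a"])
        ]
        pd.DataFrame(rows).to_csv("collected_page_data.csv", index=False)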

    File collection: Includes real-time file collection and processing technologies such as Flume, ELK-based log collection, incremental collection, and so on.

    Big data preprocessing

    Big data preprocessing refers to a series of operations, such as cleaning, filling, smoothing, merging, normalization, and consistency checking, performed on the collected raw data before analysis in order to improve data quality and lay the foundation for later analysis work. Data preprocessing mainly includes four parts:

    1. Data cleaning
    2. Data integration
    3. Data conversion
    4. Data reduction

    Data cleaning refers to the use of cleaning tools such as ETL to deal with missing data (missing attributes of interest), noisy data (errors in the data, or data that deviates from expected values), and inconsistent data.

    Data integration refers to consolidating data from different sources and storing it in a unified database. This step focuses on solving three problems: schema matching, data redundancy, and the detection and handling of data value conflicts.

    Data conversion refers to the process of resolving inconsistencies in the extracted data. It also includes a cleaning aspect, that is, removing or correcting abnormal data according to business rules to ensure the accuracy of subsequent analysis results.

    Data reduction refers to minimizing the amount of data to obtain a smaller data set while preserving the original character of the data as much as possible. Techniques include data cube aggregation, dimensionality reduction, data compression, numerosity reduction, concept hierarchy generation, and others.
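    To make the four parts slightly more concrete, the short pandas sketch below runs a toy record set through cleaning (missing and out-of-range values), an integration-style harmonization and deduplication, conversion (scaling to a common range), and reduction (keeping only the attributes needed downstream). The columns and rules are purely illustrative.

        import pandas as pd

        raw = pd.DataFrame({
            "customer_id": [1, 2, 2, 3, 4],
            "age":         [34, None, 29, 29, 290],          # missing and noisy values
            "country":     ["DE", "de", "DE", "NL", "NL"],
            "spend_eur":   [120.0, 80.0, 80.0, 60.0, 45.0],
        })

        # 1. Cleaning: drop values outside a plausible range, fill missing ages.
        df = raw[raw["age"].isna() | raw["age"].between(0, 120)].copy()
        df["age"] = df["age"].fillna(df["age"].median())

        # 2. Integration: harmonize codes and remove duplicate records.
        df["country"] = df["country"].str.upper()
        df = df.drop_duplicates(subset="customer_id")

        # 3. Conversion: scale spend to [0, 1] so attributes are comparable.
        df["spend_scaled"] = (df["spend_eur"] - df["spend_eur"].min()) / (
            df["spend_eur"].max() - df["spend_eur"].min()
        )

        # 4. Reduction: keep only the columns later analysis actually uses.
        df = df[["customer_id", "age", "country", "spend_scaled"]]
        print(df)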

    Big data storage

    Big data storage refers to persisting the collected data in database form, for which there are three typical routes:

    New database clusters based on MPP architecture: These use a shared-nothing architecture combined with the efficient distributed computing model of MPP, along with column storage, coarse-grained indexing, and other big data processing technologies, with a focus on storage methods developed for industry big data. Characterized by low cost, high performance, and high scalability, they are widely applied in enterprise analytics.

    Compared with traditional databases, the PB-scale data analysis capabilities of MPP products offer significant advantages, and MPP databases have naturally become the preferred choice for a new generation of enterprise data warehouses.

    Technology extension and encapsulation based on Hadoop: This route targets data and scenarios that are difficult to handle with traditional relational databases, such as the storage and computation of unstructured data. It leverages Hadoop's open-source advantages and its strengths in handling unstructured and semi-structured data, complex ETL processes, and complex data mining and computation models to derive the relevant big data technologies.

    As the technology advances, its application scenarios will gradually expand. The most typical scenario at present is supporting internet-scale big data storage and analysis by extending and encapsulating Hadoop, which involves dozens of NoSQL technologies.

    Big data all-in-one: This is a combination of software and hardware designed for the analysis and processing of big data. It consists of a set of integrated servers, storage devices, operating systems, database management systems, and pre-installed and optimized software for data query, processing, and analysis. It has good stability and vertical scalability.

    Big data analysis and mining

    Big data analysis and mining is the process of extracting, refining, and analyzing raw, chaotic data through approaches such as visual analysis, data mining algorithms, predictive analysis, semantic engines, and data quality management.

    Visual analysis: An analysis method that clearly and effectively conveys and communicates information with the aid of graphical means. It is mainly used for association analysis of massive data: with the help of a visual data analysis platform, dispersed heterogeneous data are analyzed for associations and presented in a complete analytical chart. The result is simple, clear, intuitive, and easy to understand.

    Data mining algorithms: Data mining algorithms are analysis methods that test and compute on data by creating data mining models. They are the theoretical core of big data analysis.

    There are many data mining algorithms, and different algorithms suit different data types, formats, and characteristics. Generally speaking, though, the process of creating a model is similar: first analyze the data provided by the user, then search for specific types of patterns and trends, use the results to define the best parameters for the mining model, and finally apply those parameters to the entire data set to extract actionable patterns and detailed statistics.

    Data quality management: The series of management activities involved in identifying, measuring, monitoring, and providing early warning of data quality problems that may arise at each stage of the data life cycle (planning, acquisition, storage, sharing, maintenance, application, retirement, etc.) in order to improve data quality.

    Predictive analysis: Predictive analysis is one of the most important application areas of big data analysis. It combines a variety of advanced analysis capabilities (specialized statistical analysis, predictive modeling, data mining, text analysis, entity analysis, optimization, real-time scoring, machine learning, etc.) to predict uncertain events. It helps users analyze trends, patterns, and relationships in structured and unstructured data, and use these indicators to predict future events and provide a basis for taking action.
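    As a compact, generic illustration (not tied to any particular product), the sketch below trains a logistic regression on synthetic behavioral indicators, checks it on held-out data, and then scores a new observation, which is the basic shape of predictive modeling and real-time scoring.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic example: two behavioral indicators and whether an event
        # (say, churn) later occurred. Real inputs would come from the storage
        # layer described above.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Fit the model and check it on held-out data.
        model = LogisticRegression().fit(X_train, y_train)
        print("holdout accuracy:", model.score(X_test, y_test))

        # Real-time scoring: probability that the event occurs for a new observation.
        print("event probability:", model.predict_proba([[1.2, -0.3]])[0, 1])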

    Semantic Engine: Semantic engine refers to the operation of adding semantics to existing data to improve users’ Internet search experience.

    Author: Sajjad Hussain

    Source: Medium

     

  • Two modern-day shifts in market research

    Two modern-day shifts in Market Research

    In an industry that is changing tremendously, traditional ways of doing things will no longer suffice. Timelines are shortening as demands for faster and faster insights increase, and at the same time we are seeking these insights in an ever-vaster sea of data. The only way to address the combination of these two issues is with technology.

    The human-machine relationship

    One good example of this shift is the whole arena surrounding computational text analysis. Smarter, artificial intelligence (AI)-based approaches are completely changing the way we approach this task. In the past, human-based analysis only allowed us to skim the text, use a small sample, and analyze it with subjective bias. This kind of generalized approach is being replaced by a computational methodology that incorporates all the text while discarding what the computer views as non-essential information. Sometimes, without the right program, much of the meaning can be lost. However, this machine-based approach can work with large amounts of data quickly.

    When we start to dive deeper into AI-based solutions, we see that technology can shoulder much of the hard work to free up humans to do what we can do better. What the machine does really well is finding the data points that can help us tell a better, richer story. It can run algorithms and find patterns in natural language, taking care of the heavy lifting. Then the human can come in, add color and apply sensible intelligence to the data. This human-machine tension is something I predict that we’ll continue to see as we accommodate our new reality. The end goal is to make the machine as smart as possible to really leverage our own limited time in the best ways possible.
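    As a small, concrete example of the machine's side of this relationship, the sketch below uses scikit-learn's TF-IDF vectorizer to surface the most distinctive terms in a handful of made-up open-ended responses, the kind of first pass that is impractical to do by hand at scale and that a human analyst can then interpret and enrich.

        from sklearn.feature_extraction.text import TfidfVectorizer

        # Made-up open-ended survey responses.
        responses = [
            "The checkout process was slow and the delivery arrived late",
            "Great value for money and the delivery was fast",
            "Customer support was helpful but the app keeps crashing",
            "The app is intuitive and support resolved my issue quickly",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        weights = vectorizer.fit_transform(responses).toarray()
        terms = vectorizer.get_feature_names_out()

        # For each response, list the three terms with the highest TF-IDF weight:
        # a quick, scalable first pass that a human analyst can then interpret.
        for i, row in enumerate(weights):
            top = row.argsort()[::-1][:3]
            print(f"response {i}:", [terms[j] for j in top])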

    Advanced statistical analysis

    Another big change taking place surrounds the statistical underpinnings we use for analysis. Traditionally we have found things out by using the humble crosstab tool. But if we truly want to understand what’s driving, for example, differences between groups, it is simply not efficient to go through crosstab after crosstab. It is much better to have the machine do it for you and reveal just the differences that matter. When you do that, though, classical statistics break down because false positives become statistically inevitable.

    Bayesian statistics do not suffer from this problem when a high volume of tests is required. In short, a Bayesian approach allows researchers to test a hypothesis and see how well it holds given the data, rather than the more commonly used significance tests, which ask how likely the data would be under a given hypothesis.
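    As a compact illustration of that framing (not the author's specific tooling), the sketch below uses a Beta-Binomial model with simulated posterior draws to ask directly how probable it is that one group's true preference rate exceeds another's, given illustrative survey counts.

        import numpy as np

        rng = np.random.default_rng(42)

        # Illustrative survey counts: respondents preferring the tested concept.
        successes_a, trials_a = 78, 150
        successes_b, trials_b = 94, 150

        # A Beta(1, 1) prior updated with the observed counts gives the posterior
        # distribution of each group's true preference rate.
        posterior_a = rng.beta(1 + successes_a, 1 + trials_a - successes_a, size=100_000)
        posterior_b = rng.beta(1 + successes_b, 1 + trials_b - successes_b, size=100_000)

        # Probability that group B's true rate exceeds group A's, given the data.
        print("P(rate_B > rate_A | data) =", (posterior_b > posterior_a).mean())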

    There are a host of other models that are changing the way we approach our daily jobs in market research. New tools, some of them based on a completely different set of underlying principles (like Bayesian statistics), are giving us new opportunities. With all these opportunities, we are challenged to work in a new set of circumstances and learn to navigate a new reality.

    We can’t afford to wait any longer to change the way we are doing things. The industry and our clients’ industries are moving too quickly for us to hesitate. I encourage researchers to embrace this new paradigm so that they will have the skill advantage. Try new tools even if you don’t understand how they work; many of them can help you do what you do (better). Doing things in new ways can lead to better, faster insights. Go for it!

    Author: Geoff Lowe

    Source: Greenbook Blog

  • Using Hierarchical Clustering in data analysis

    Using Hierarchical Clustering in data analysis

    This article discusses the analytical method of Hierarchical Clustering and how it can be used within an organization for analytical purposes.

    What is Hierarchical Clustering?

    Hierarchical Clustering is a process by which objects are classified into a number of groups so that objects in different groups are as dissimilar as possible and objects within each group are as similar as possible.

    For example, if you want to create four groups of items, the items within each group should be as similar as possible in terms of their attributes, while items in, say, group 1 and group 2 should be as dissimilar as possible. In the divisive (top-down) approach, all items start in one cluster, which is then split into two clusters so that the data points within each cluster are as similar as possible and as dissimilar as possible from those in the other cluster. The splitting is repeated for each cluster until the specified number of clusters is reached (four in this case).
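    The description above is the top-down (divisive) variant; most libraries implement the bottom-up (agglomerative) variant, which yields the same kind of grouping. As a rough sketch, the snippet below uses SciPy on a few made-up customer records and cuts the resulting hierarchy into four clusters, matching the example above.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # Made-up customer attributes: [orders per year, average order value].
        customers = np.array([
            [2, 40], [3, 35], [25, 30], [27, 28],
            [5, 300], [6, 280], [30, 310], [28, 295],
        ])

        # Build the hierarchy by repeatedly merging the most similar clusters (Ward linkage).
        tree = linkage(customers, method="ward")

        # Cut the tree so that exactly four groups remain, as in the example above.
        labels = fcluster(tree, t=4, criterion="maxclust")
        print(labels)  # cluster label (1-4) assigned to each customer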

    This type of analysis can be applied to segment customers by purchase history, segment users by the types of activities they perform on websites or applications, develop personalized consumer profiles based on activities or interests, recognize market segments, and more.

    How does an organization use Hierarchical Clustering to analyze data?

    In order to understand the application of Hierarchical Clustering for organizational analysis, let us consider two use cases.

    Use case one

    Business problem: A bank wants to group loan applicants into high/medium/low risk based on attributes such as loan amount, monthly installments, employment tenure, the number of times the applicant has been delinquent in other payments, annual income, debt to income ratio etc.

    Business benefit: Once the segments are identified, the bank will have a loan applicant dataset with each applicant labeled as high/medium/low risk. Based on these labels, the bank can easily decide whether to grant a loan to an applicant, how much credit to extend, and the interest rate to offer, according to the amount of risk involved.

    Use case two

    Business problem: The enterprise wishes to organize customers into groups/segments based on similar traits, product preferences and expectations. Segments are constructed based on customer demographic characteristics, psychographics, past behavior and product use behavior.

    Business benefit: Once the segments are identified, marketing messages and products can be customized for each segment. The better the segment(s) chosen for targeting by a particular organization, the more successful the business will be in the market.

    Hierarchical Clustering can help an enterprise organize data into groups to identify similarities and, equally important, dissimilar groups and characteristics, so that the business can target pricing, products, services, marketing messages and more.

    Author: Kartik Patel

    Source: Dataversity
