6 items tagged "data analytics"

  • 4 Trends That Are Driving Business Intelligence Demands

    Many organizations have sung the praises of business intelligence for years, but many of those firms were not actually realizing its full benefits. That picture is beginning to change as advanced analytics tools and techniques mature.

    The result is that 2016 will definitely be the ‘year of action’ that many research firms have predicted when it comes to data analytics. That, at least, is the view of Shawn Rogers, chief research officer at Dell Statistica, who believes “we are at a tipping point with advanced analytics.”

    If Rogers sounds familiar, it may be due to his early connection to Information Management. Rogers was, in fact, the founder of Information Management when it was originally called DM Review Magazine. He is now in his second year as chief research officer for Dell Statistica. “Prior to that I was an industry analyst. I worked for Enterprise Management Associates and I covered the business intelligence, data warehousing and big data space.”

    Rogers believes there are a number of key trends driving business intelligence today that are making it more useful for a greater number of organizations.

    “The maturity in the market has helped everyone evolve to a much more agile and flexible approach to advanced analytics. I think there are four things that are driving that which make it exciting,” Rogers says.

    “One of them is the new sophistication of users,” Rogers notes. “Users have become very comfortable with business intelligence. They want advanced insights into their business, so they’re starting to look at advanced analytics as that next level of sophistication.”

    “They’re certainly not afraid of it. They want it to be more consumable. They want it to be easier to get to. And they want it to move as fast as they are. The users are certainly making a change in the market,” Rogers says.

    The market is also benefitting from new technologies that are enhancing the capabilities of advanced analytics.

    “It now functions in a way that the enterprise functions,” Rogers explains. “Now the technology allows advanced analytics on all of the data within your environment to work pretty much at the speed of the business.”

    Also significant is the economic advantage created by greater competition among data analytics tool vendors.

    “There are all kinds of solutions out there that cost less money. It has opened the door for a much wider group of companies to leverage the data in their enterprise and to leverage advanced analytics,” Rogers observes.

    “Lastly, the data is creating some fun pressure and opportunities. You have all these new data sources like social and things of that nature. But even more importantly we’re able to incorporate all of our data into our analysis,” Rogers says.

    “I know that when I was in the press and as an analyst I used to write a lot about the 80/20 rule of data in the enterprise – the 20 percent we could use and the 80 percent that was too difficult. Now, with all these new technologies and their cost benefits, we’re not ignoring this data. So we’re able to bring in what used to look like expensive and difficult-to-manage information, and we’re merging it with more traditional analytics.”

    “If you look at more sophisticated users, and economic advantage, and better technology, and new data, everything is changing,” Rogers says. “I think those four pieces are what are enabling advanced analytics to find a more critical home in the enterprise.”

    Finally, another key trend driving the need for speed in analytics and business intelligence return on investment is where those investments are coming from. Increasingly, they are not coming from IT, Rogers stresses.

    “I think there has been a big shift and most of the budgets now seem to be coming from the line of business – sales, marketing, finance, customer service. These are places where we’re seeing budgets fly with data-driven innovation,” Rogers says.

    “When you shift away from the technology side of innovation and move toward the business side, there is always that instant demand for action. I think that saturation of big data solutions, the saturation of analytics tools, and a shift from IT to the business stakeholder standpoint is creating the demand for action over just collecting data,” Rogers concludes.

    Source: Information Management

  • Big Data on the cloud makes economic sense

    With Big Data analytics solutions increasingly being made available to enterprises in the cloud, more and more companies will be able to afford and use them for agility, efficiency and competitiveness

    For almost 10 years, only the biggest of technology firms such as Alphabet Inc.’s Google and Amazon.com Inc. used data analytics on a scale that justified the idea of ‘big’ in Big Data. Now more and more firms are warming up to the concept. Photo: Bloomberg

    On 27 September, enterprise software company SAP SE completed the acquisition of Altiscale Inc.—a provider of Big Data as-a-Service (BDaaS). The news came close on the heels of data management and analytics company Cloudera Inc. and data and communication services provider CenturyLink Inc. jointly announcing BDaaS services. Another BDaaS vendor, Qubole Inc., said it would offer a big data service solution for the Oracle Cloud Platform.

    These are cases in point of the growing trend to offer big data analytics using a cloud model. Cloud computing allows enterprises to pay for software modules or services used over a network, typically the Internet, on a monthly or periodical basis. It helps firms save relatively larger upfront costs for licences and infrastructure. Big Data analytics solutions enable companies to analyse multiple data sources, especially large data sets, to take more informed decisions.
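
    To make that cost trade-off concrete, here is a minimal sketch of the economics. All figures below are hypothetical assumptions for illustration, not vendor pricing from the article:

    ```python
    # Hypothetical cost comparison: upfront licence vs. cloud subscription.
    # Every figure here is an illustrative assumption, not real pricing.

    UPFRONT_LICENCE = 120_000                      # one-time licence + infrastructure
    ANNUAL_MAINTENANCE = 0.20 * UPFRONT_LICENCE    # assumed yearly maintenance
    MONTHLY_SUBSCRIPTION = 4_000                   # assumed cloud (BDaaS) monthly fee

    def on_premises_cost(months: int) -> float:
        """Cumulative cost of the upfront-licence model after `months`."""
        return UPFRONT_LICENCE + ANNUAL_MAINTENANCE * (months / 12)

    def cloud_cost(months: int) -> float:
        """Cumulative cost of the subscription model after `months`."""
        return MONTHLY_SUBSCRIPTION * months

    # Find the month where the subscription stops being cheaper.
    for m in range(1, 121):
        if cloud_cost(m) >= on_premises_cost(m):
            print(f"Break-even around month {m}: "
                  f"cloud ${cloud_cost(m):,.0f} vs on-prem ${on_premises_cost(m):,.0f}")
            break
    ```

    Under these assumptions the subscription stays cheaper for roughly five years; more importantly for smaller firms, the large upfront outlay disappears entirely.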

    According to research firm International Data Corporation (IDC), the global big data technology and services market is expected to grow at a compound annual growth rate (CAGR) of 23.1% over 2014-2019, and annual spending is estimated to reach $48.6 billion in 2019.


    MarketsandMarkets, a research firm, estimates the BDaaS segment will grow from $1.8 billion in 2015 to $7 billion in 2020. There are other, even more optimistic estimates: research firm Technavio, for instance, forecasts this segment to grow at a CAGR of 60% from 2016 to 2020.
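
    These estimates can be sanity-checked with the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1. The snippet below applies it to the MarketsandMarkets figures quoted above:

    ```python
    # Implied CAGR from the MarketsandMarkets estimate quoted above:
    # $1.8 billion in 2015 growing to $7 billion in 2020 (5 years).
    start, end, years = 1.8, 7.0, 5

    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # ~31.2% per year

    # For comparison, Technavio's 60% CAGR compounded over four
    # periods would mean roughly 6.6x growth across the span.
    print(f"60% CAGR over 4 years: {1.60 ** 4:.1f}x")
    ```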

    Where does this optimism stem from?

    For almost 10 years, it was only the biggest of technology firms such as Alphabet Inc.’s Google and Amazon.com Inc., that used data analytics on a scale that justified the idea of ‘big’ in Big Data. In industry parlance, three key attributes are often used to understand the concept of Big Data. These are volume, velocity and variety of data—collectively called the 3Vs.

    Increasingly, not just Google and its rivals, but a much wider swathe of enterprises are storing, accessing and analysing a mountain of structured and unstructured data. The trend is necessitated by growing connectivity, falling cost of storage, proliferation of smartphones and huge popularity of social media platforms—enabling data-intensive interactions not only among ‘social friends’ but also among employers and employees, manufacturers and suppliers, retailers and consumers—virtually all sorts of connected communities of people.

    A November 2015 IDC report predicts that by 2020, organisations that are able to analyse all relevant data and deliver actionable information will achieve an extra $430 billion in productivity benefits over their less analytically oriented peers.

    The nascent nature of BDaaS, however, is causing some confusion in the market. In a 6 September article on Nextplatform.com, Prat Moghe, founder and chief executive of Cazena—a services vendor—wrote that there is confusion regarding the availability of “canned analytics or reports”. According to him, vendors (solutions providers) should be carefully evaluated, and aspects such as moving data sets between different cloud and on-premises systems and the ease of configuring the platform need to be kept in mind before making a purchase decision.

    “Some BDaaS providers make it easy to move datasets between different engines; others require building your own integrations. Some BDaaS vendors have their own analytics interfaces; others support industry-standard visualization tools (Tableau, Spotfire, etc.) or programming languages like R and Python. BDaaS vendors have different approaches, which should be carefully evaluated,” he wrote.

    Nevertheless, the teething troubles are likely to be far outweighed by the benefits that BDaaS brings to the table. The key drivers, according to the IDC report cited above, include the digital transformation initiatives being undertaken by many enterprises; the merging of real life with digital identity as all forms of personal data become available in the cloud; the availability of multiple payment and usage options for BDaaS; and the ability of BDaaS to put more analytics power in the hands of business users.

    Another factor that will ensure the growth of BDaaS is the scarcity of skills in cloud as well as analytics technologies. Compared with individual enterprises, cloud service providers such as Google, Microsoft Corp., Amazon Web Services and International Business Machines Corp. (IBM) can attract and retain talent more easily and for longer durations.

    Manish Mittal, managing principal and head of global delivery at Axtria, a medium-sized Big Data analytics solutions provider, says the adoption of BDaaS in India is often driven by business users. While the need is felt by both chief information officers and business leaders, he believes that the latter often drive adoption as they feel more empowered in the organisation.

    The potential for BDaaS in India can be gauged from Axtria’s year-on-year business growth of 60% for the past few years—and there are several niche big data analytics vendors currently operating in the country (besides large software companies).

    Mittal says that the growth of BDaaS adoption will depend on how quickly companies tackle the issue of improving data quality.

    Source: livemint.com, October 10, 2016

  • Competencies and capabilities for success with (big) data analytics


    Big data analytics is of great value to companies across all industries. That value arises, among other things, from a sharper focus on the customer and from improved processes. Yet extracting this value is not straightforward: many organizations underestimate the costs, complexity and competencies required to get there.

    Big data analytics

    Big data analytics helps analyse data sets that are generally much larger and more varied than the data types found in traditional business intelligence or data warehouse environments. Its goal is to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information.

    Why is success with big data hard to achieve?

    Success with big data is not a given. Many organizations struggle with deploying big data on several fronts, including the following:

    • Big data analytics is treated as a technology project rather than as a transformation that takes place on several fronts within the organization.
    • The vendor ecosystem is fragmented and changes rapidly.
    • New technologies and architectures demand new skills from users.
  • Five Mistakes That Can Kill Analytics Projects

    Launching an effective digital analytics strategy is a must-do for understanding your customers. But many organizations are still trying to figure out how to get business value from expensive analytics programs. Here are five common analytics mistakes that can kill any predictive analytics effort.

    Why predictive analytics projects fail


    Predictive analytics is becoming the next big buzzword in the industry. But according to Mike Le, co-founder and chief operating officer at CB/I Digital in New York, implementing an effective digital analytics strategy has proven very challenging for many organizations. “First, the knowledge and expertise required to set up and analyze digital analytics programs is complicated,” Le notes. “Second, the investment in the tools and the required expertise can be high. Third, many clients see unclear returns from such analytics programs. Learning to avoid common analytics mistakes will help you save a lot of resources to focus on the core metrics and factors that can drive your business ahead.” Here are five common mistakes that Le says cause many predictive analytics projects to fail.

    Mistake 1: Starting digital analytics without a goal

    “The first challenge of digital analytics is knowing what metrics to track, and what value to get out of them,” Le says. “As a result, we see too many web businesses that don’t have basic conversion tracking set up, or can’t link the business results with the factors that drive those results. This problem happens because these companies don’t set a specific goal for their analytics. When you do not know what to ask, you cannot know what you’ll get. The purpose of analytics is to understand and to optimize. Every analytics program should answer specific business questions and concerns. If your goal is to maximize online sales, naturally you’ll want to track order volume, cost-per-order, conversion rate and average order value. If you want to optimize your digital product, you’ll want to track how users interact with your product, the usage frequency and the churn rate of people leaving the site. When you know your goal, the path becomes clear.”
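
    To make the sales metrics Le lists concrete, here is a minimal sketch of how they would be computed from raw campaign totals. The input numbers are made-up placeholders:

    ```python
    # Minimal sketch of the core e-commerce metrics Le mentions.
    # All input figures are illustrative placeholders.

    visits = 50_000          # sessions on the site
    orders = 1_000           # completed purchases (the "order volume")
    ad_spend = 25_000.00     # total marketing cost
    revenue = 85_000.00      # total order revenue

    conversion_rate = orders / visits         # share of visits that buy
    cost_per_order = ad_spend / orders        # acquisition cost per sale
    average_order_value = revenue / orders    # revenue per sale

    print(f"Conversion rate:     {conversion_rate:.2%}")        # 2.00%
    print(f"Cost per order:      ${cost_per_order:,.2f}")       # $25.00
    print(f"Average order value: ${average_order_value:,.2f}")  # $85.00
    ```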

    Mistake 2: Ignoring core metrics to chase noise

    “When you have advanced analytics tools and strong computational power, it’s tempting to capture every data point possible to ‘get a better understanding’ and ‘make the most of the tool,’” Le explains. “However, following too many metrics may dilute your focus on the core metrics that reveal the pressing needs of the business. I’ve seen digital campaigns that fail to convert new users, but the managers still set up advanced tracking programs to understand user behaviors in order to serve them better. When you cannot acquire new users, your targeting could be wrong, your messaging could be wrong, or there may even be no market for your product – those problems are much bigger than trying to understand your user engagement. It would therefore be a waste of time and resources to chase fancy data and insights while the fundamental metrics are overlooked. Make sure you always stay focused on the most important business metrics before looking broader.”

    Mistake 3: Choosing overkill analytics tools

    “When selecting analytics tools, many clients tend to believe that more advanced and expensive tools can give deeper insights and solve their problems better,” Le says. “Advanced analytics tools may offer more sophisticated analytic capabilities than some fundamental tracking tools. But whether your business needs all those capabilities is a different story. That’s why the decision to select an analytics tool should be based on your analytics goals and business needs, not on how advanced the tools are. There’s no need to invest a lot of money in big analytics tools and a team of experts for an analytics program when some advanced features of free tools like Google Analytics can already give you the answers you need.”

    Mistake 4: Creating beautiful reports with little business value

    “Many times you see reports that simply present a bunch of numbers exported from tools, or state some ‘insights’ that have little relevance to the business goal,” Le notes. “This problem is so common in the analytics world because a lot of people create reports for the sake of reporting. They don’t think about why those reports should exist, what questions they answer and how those reports can add value to the business. Any report must be created to answer a business concern. Any metric that does not help answer business questions should be left out. Making sense of data is hard. Asking the right questions early will help.”

    Mistake 5: Failing to detect tracking errors

    “Tracking errors can be devastating to businesses, because they produce unreliable data and misleading analysis,” Le cautions. “But many companies do not have the skills to set up tracking properly and, worse, to detect tracking issues when they happen. There are many things that can go wrong, such as a developer mistakenly removing the tracking pixels, incorrect values being transferred, the tracking code firing unstably or multiple times, or faulty tracking-rule logic. The difference can be so subtle that the reports look normal, or are only wrong in certain scenarios. Tracking errors easily go undetected because detecting them takes a mix of marketing and tech skills. Marketing teams usually don’t understand how tracking works, and development teams often don’t know what ‘correct’ means. To tackle this problem, you should frequently check your data accuracy and look for unusual signs in reports. Analysts should take an extra step to learn the technical side of tracking, so they can better sense problems and raise smart questions for the technical team when the data looks suspicious.”
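
    One cheap way to “look for unusual signs in reports”, as Le suggests, is an automated check that flags days whose event counts deviate sharply from a trailing baseline. The sketch below is a generic illustration with assumed thresholds, not any specific tool’s API:

    ```python
    # Sketch: flag days whose tracked-event counts deviate sharply from
    # the trailing average -- a cheap first alarm for broken tracking
    # (pixels removed, code firing twice, etc.). Thresholds are assumptions.
    from statistics import mean, stdev

    def flag_anomalies(daily_counts, window=7, threshold=3.0):
        """Yield (day_index, count) pairs more than `threshold` standard
        deviations away from the trailing `window`-day mean."""
        for i in range(window, len(daily_counts)):
            baseline = daily_counts[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(daily_counts[i] - mu) > threshold * sigma:
                yield i, daily_counts[i]

    # Example: steady traffic, then the pixel starts double-firing on day 10.
    counts = [980, 1010, 995, 1005, 990, 1000, 1015, 1002, 998, 1008, 2010]
    for day, count in flag_anomalies(counts):
        print(f"Day {day}: {count} events looks suspicious")
    ```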

    Author: Mike Le

    Source: Information Management

  • Software picks the best job applicant

    Interviewing job applicants is a waste of time. Anyone with enough historical data and the right computational models can distil precisely from a pile of CVs who is most suitable for a given vacancy. Better still: with sufficient data, a recruitment specialist can predict how good someone will become at a job without ever having seen that person.

    A sophisticated computational model

    For most companies the above is a distant future scenario, but the technology already exists, researcher Colin Lee argues in his PhD thesis. He received his doctorate this month from the Rotterdam School of Management (Erasmus University) for research in which he used a sophisticated computational model to analyse patterns in more than 440,000 real CVs and job applications. The model turns out to predict with 70% accuracy who will actually be invited to an interview, based on factors such as work experience, education level and skills.
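
    The article does not publish Lee's model, but the general approach — fitting a classifier on historical application outcomes — can be sketched as follows. The features and toy training data below are invented for illustration and are not Lee's actual data or model:

    ```python
    # Illustrative sketch only: a logistic-regression screen over CV features,
    # in the spirit of (but not identical to) the model described in the thesis.
    # Feature values and training labels are invented toy data.
    from sklearn.linear_model import LogisticRegression

    # Features per applicant: [years_experience, education_level (0-3),
    #                          relevance_of_experience (0.0-1.0)]
    X_train = [
        [1, 1, 0.2], [8, 2, 0.9], [3, 3, 0.4], [10, 2, 0.8],
        [0, 1, 0.1], [5, 2, 0.7], [2, 1, 0.3], [7, 3, 0.9],
    ]
    y_train = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = invited to interview

    model = LogisticRegression().fit(X_train, y_train)

    candidate = [[6, 2, 0.85]]   # hypothetical new CV
    prob = model.predict_proba(candidate)[0][1]
    print(f"Estimated probability of an interview invitation: {prob:.0%}")
    ```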

    Intuition

    ‘Important predictors are the relevance of the work experience and the number of years of service. You can combine those in a formula and so determine the best match,’ Lee says. Although work experience is decisive, recruiters are otherwise not very consistent in what they let tip the balance, he concludes from the patterns. ‘We can recognise a common thread in it, but much seems to happen on the basis of intuition.’

    Suspicion

    While Dutch companies are wary of giving 'big data' analysis a central role in recruitment and selection, that practice has been commonplace in Silicon Valley for years. Front-runners such as Google base their hiring policy primarily on hard data and algorithms built on successful hires from the past. ‘Companies are often extremely bad at recruiting and interviewing people. They navigate by gut feeling and unfounded theories,’ Google’s head of human resources Laszlo Bock said last year in an interview with the FD.

    Can a company find its way to the perfect candidate with data alone? In the Netherlands there is considerable suspicion, and not only about the still unproven technology. Ethical questions also play a role, Lee says. ‘The future is that you can calculate exactly how someone will perform based on the parameters in their CV. That is frightening, because you write people off in advance.’

    Optimal match

    Recruitment software has long been applied in less extreme forms, for instance by large staffing agencies such as Randstad, USG and Adecco. Using special software, they make a first preselection from hundreds or even thousands of CVs. This is done with so-called applicant tracking systems (ATS): filters that use both public social media data and clients’ internal databases to recruit, or to determine whether an employee is the optimal ‘match’ in his current role.
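
    A first preselection of the kind these systems perform can be as simple as a keyword filter over CV text. The sketch below shows the idea; the skills, applicant names and CV snippets are invented, and real ATS products add parsing, synonym handling and scoring on top:

    ```python
    # Toy sketch of an ATS-style first preselection: keep only CVs that
    # mention every required skill. All data below is invented.

    required_skills = {"python", "sql", "machine learning"}

    cvs = {
        "applicant_001": "Data analyst, 5 yrs SQL and Python, machine learning courses",
        "applicant_002": "Marketing manager with social media experience",
        "applicant_003": "ML engineer: Python, SQL, machine learning in production",
    }

    def preselect(cv_text: str) -> bool:
        """True if the CV text mentions every required skill."""
        text = cv_text.lower()
        return all(skill in text for skill in required_skills)

    shortlist = [name for name, text in cvs.items() if preselect(text)]
    print(shortlist)   # ['applicant_001', 'applicant_003']
    ```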

    ‘We can often see better than the company itself whether everyone within it is reaching their potential,’ says Jan van Goch of Connexys, a maker of recruitment software. According to him, the main barrier to the further development of such applications is not so much the technology as clients’ fear of privacy breaches and liability. They often sit on mountains of valuable historical information about their applicants, but refuse to open it up for use in larger databases.

    Legislation

    Van Goch: ‘If all that information were brought together, we could match and recruit far more intelligently. Clients want that, but they do not always give permission to use their own data, so they keep sitting on it, and that is a terrible waste. Some are afraid of being sued the moment the data becomes public, all the more so since data storage legislation has been tightened.’

    Source: FD

  • Wrangling and governing unstructured data

    Unstructured data is the common currency in this era of the Internet of Things (IoT), cognitive computing, mobility and social networks. It’s a core resource for businesses, consumers and society in general. But it’s also a challenge to manage and govern.

    Unstructured data’s prevalence

    How prevalent is unstructured data? Sizing it up can give us a good sense of the magnitude of the governance challenge. If we look at the world around us, we see billions of things becoming instrumented and interconnected, generating tons of data. In the Internet of Things, the value of things is measured not only by the data they generate, but also by the way those things securely respond to and interact with people, organizations and other things.

    If we look into public social networks such as Facebook, LinkedIn or Twitter, one of the tasks is to understand what the social network data contains, so that valuable information can be extracted and then matched and linked to the master data. And mobile devices, enabled with the Global Positioning System (GPS), generate volumes of location data that is normally contained in very structured data sets. Matching and linking it to master data profiles will be necessary.
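
    That matching-and-linking step is essentially record linkage. A minimal sketch of the idea, using only the standard library, might look like the following; the profiles, names and similarity threshold are illustrative assumptions:

    ```python
    # Minimal record-linkage sketch: match incoming social/mobile records to
    # master data profiles by normalized-name similarity. Names, thresholds
    # and records are invented for illustration.
    from difflib import SequenceMatcher

    master_profiles = {
        "C-1001": "Maria Gonzalez",
        "C-1002": "John A. Smith",
        "C-1003": "Wei Zhang",
    }

    def similarity(a: str, b: str) -> float:
        """Rough string similarity between two normalized names."""
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

    def link(record_name: str, threshold: float = 0.8):
        """Return the best-matching master profile ID, or None if no
        candidate clears the confidence threshold."""
        best_id, best_score = None, 0.0
        for pid, name in master_profiles.items():
            score = similarity(record_name, name)
            if score > best_score:
                best_id, best_score = pid, score
        return best_id if best_score >= threshold else None

    print(link("maria gonzales"))   # C-1001 (close spelling variant)
    print(link("unknown user"))     # None  (no confident match)
    ```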

    The volume of unstructured information is growing as never before, mostly because of the increase in unstructured information that is stored and managed by enterprises but not really well understood. Frequently, unstructured data is intimately linked to structured data—in our databases, in our business processes and in the applications that derive value from it all. In terms of where we store and manage it, the difference between structured and unstructured data is usually that the former resides in databases and data warehouses and the latter in everything else.

    In format, structured data is generated by applications, and unstructured data is free form. In addition, like structured data, unstructured data usually has metadata associated with it. But not always, and therein lies a key problem confronting enterprise information managers in their attempts to govern it all comprehensively.

    Governance of the structured-unstructured data link

    When considering the governance of unstructured data, it is important to focus on the business processes that generate both the data itself and any accompanying metadata. Unstructured data, such as audio, documents, email, images and video, is usually created in a workflow or collaboration application, generated by a sensor or other device, or produced upon ingestion into some other system or application. At creation, unstructured data is often but not always associated with structured data, which has its own metadata, glossaries and schemata.

    In some industries, such as oil and gas or healthcare, we handle the unstructured data that streams from the sensors where it originated. In any case, unstructured data is usually created or managed in a business process that is linked to some structured entity, such as a person or asset. Consider several examples, followed by a short sketch of such a linkage:

    • An insurance claim with structured data in a claims processing application and associated documents such as police records, medical reports and car images
    • A mortgage case file with structured data in a mortgage processing application and associated applicant employment status and house assessment documents
    • An invoice with structured data in an asset management application and associated invoice documents
    • An asset with records managed across different applications and associated engineering drawings 
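
    A minimal way to represent such a linkage is a metadata record that ties each unstructured document to the structured entity it belongs to. The sketch below uses the insurance-claim example; all field names and values are invented for illustration:

    ```python
    # Illustrative metadata record linking unstructured documents to the
    # structured entity they belong to (an insurance claim here).
    # All field names and values are invented for the example.
    from dataclasses import dataclass, field

    @dataclass
    class DocumentLink:
        document_id: str
        document_type: str        # e.g. "police_record", "medical_report", "image"
        storage_uri: str          # where the unstructured content lives
        retention_policy: str     # governance hook: how long we must keep it

    @dataclass
    class ClaimRecord:
        claim_id: str             # structured key in the claims application
        policyholder_id: str
        documents: list[DocumentLink] = field(default_factory=list)

    claim = ClaimRecord(
        claim_id="CLM-2016-0042",
        policyholder_id="P-7781",
        documents=[
            DocumentLink("DOC-1", "police_record",
                         "s3://claims/clm-0042/police.pdf", retention_policy="7y"),
            DocumentLink("DOC-2", "car_image",
                         "s3://claims/clm-0042/photo1.jpg", retention_policy="7y"),
        ],
    )
    print(f"{claim.claim_id} links {len(claim.documents)} unstructured documents")
    ```

    Keeping the retention policy and storage location on the link itself is one way to let the governance controls discussed below travel with the data.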

    Governance challenges enter the picture as we attempt to link all this structured and unstructured information together. That linkage, in turn, requires that we understand dependencies and references and find the right data, which is often stored elsewhere in the enterprise and governed by different administrators, under different policies and in response to different mandates.

    What considerations complicate our efforts to combine, integrate and govern structured and unstructured data in a unified fashion? We must know how we control this information, how it is exchanged across different enterprises, and which regulations and standards apply to secure the delivery of its value and maintain privacy.

    We also need to understand what we are going to do with the data we collect, because just collecting data for future use, just in case, is not a solution to any problem. We can easily shift from competitive advantage to unmanageable complexity.

    Governance perspectives

    Across different industries, in a complicated ecosystem of connected enterprises, we handle different types of information that are exchanged, duplicated, made anonymous and duplicated again. In analytics we build predictive models whose recommendations feed critical decision making. We need to think about the lifecycle of these models and track the data sets used to develop them, as well as changes in ownership.

    How can governance be applied here? When we speak about information, integration and governance, we usually get different answers. Some, such as legal records managers, focus on unstructured data curation, document classification and retention to comply with internal policies and external legislation. Data warehouse IT groups, on the other hand, focus on structured and transactional data and its quality to maintain the best version of the truth.

    But the business usually doesn’t care what type of information it is. What it wants to see is the whole picture: all related information from structured, unstructured and other sources, with proper governance around it. Integrated metadata management thus becomes crucial.

    Data lifecycle governance environments

    To unify governance of structured and unstructured data, enterprises need to remove the borders between information silos. In addition, organizations need to connect people and processes inside and outside the organization. And they need to make every effort to create trusted, collaborative environments for effective information configuration and management.

    What should span all information assets, both structured and unstructured, is a consistent set of organizational policies, roles, controls and workflows focused on lifecycle data governance.

    Author: Elizabeth Koumpan

    Source: Big Data & Analytics Hub
