40 items tagged "predictive analytics"

  • 10 Big Data Trends for 2017

    Infogix, a leader in helping companies provide end-to-end data analysis across the enterprise, today highlighted the top 10 data trends it foresees will be strategic for most organizations in 2017.
    “This year’s trends examine the evolving ways enterprises can realize better business value with big data and how improving business intelligence can help transform organization processes and the customer experience (CX),” said Sumit Nijhawan, CEO and President of Infogix. “Business executives are demanding better data management for compliance and increased confidence to steer the business, more rapid adoption of big data and innovative and transformative data analytic technologies.”
    The top 10 data trends for 2017 were assembled by a panel of Infogix senior executives. The key trends include:
    1.    The Proliferation of Big Data
        Proliferation of big data has made it crucial to analyze data quickly to gain valuable insight.
        Organizations must turn the terabytes of big data that is not being used, classified as dark data, into usable data.
        Big data has not yet yielded the substantial results that organizations require to develop new insights for new, innovative offerings and derive a competitive advantage.
    2.    The Use of Big Data to Improve CX
        Using big data to improve CX by moving from legacy to vendor systems, during M&A, and with core system upgrades.
        Analyzing data with self-service flexibility to quickly harness insights about leading trends, along with competitive insight into new customer acquisition growth opportunities.
        Using big data to better understand customers in order to improve top line revenue through cross-sell/upsell or remove risk of lost revenue by reducing churn.
    3.    Wider Adoption of Hadoop
        More and more organizations will adopt Hadoop and other big data stores; in turn, vendors will rapidly introduce new, innovative Hadoop solutions.
        With Hadoop in place, organizations will be able to crunch large amounts of data using advanced analytics to find nuggets of valuable information for making profitable decisions.
    4.    Hello to Predictive Analytics
        Precisely predict future behaviors and events to improve profitability.
        Make a leap in improving fraud detection rapidly to minimize revenue risk exposure and improve operational excellence.
    5.    More Focus on Cloud-Based Data Analytics
        Moving data analytics to the cloud accelerates adoption of the latest capabilities to turn data into action.
        Cut costs in ongoing maintenance and operations by moving data analytics to the cloud.
    6.    The Move toward Informatics and the Ability to Identify the Value of Data
        Use informatics to help integrate the collection, analysis and visualization of complex data to derive revenue and efficiency value from that data.
        Tap an underused resource – data – to increase business performance.
    7.    Achieving Maximum Business Intelligence with Data Virtualization
        Data virtualization unlocks what is hidden within large data sets.
        Graphic data virtualization allows organizations to retrieve and manipulate data on the fly regardless of how the data is formatted or where it is located.
    8.    Convergence of IoT, the Cloud, Big Data, and Cybersecurity
        The convergence of data management technologies such as data quality, data preparation, data analytics, data integration and more.
        As we continue to become more reliant on smart devices, inter-connectivity and machine learning will become even more important to protect these assets from cyber security threats.
    9.    Improving Digital Channel Optimization and the Omnichannel Experience
        Balancing traditional and digital channels to connect with customers in their preferred channel.
        Continuously looking for innovative ways to enhance CX across channels to achieve a competitive advantage.
    10.    Self-Service Data Preparation and Analytics to Improve Efficiency
        Self-service data preparation tools boost time to value, enabling organizations to prepare data regardless of its type, whether structured, semi-structured or unstructured.
        Decreased reliance on development teams to massage the data by introducing more self-service capabilities to give power to the user and, in turn, improve operational efficiency.
    “Every year we see more data being generated than ever before and organizations across all industries struggle with its trustworthiness and quality. We believe the technology trends of cloud, predictive analysis and big data will not only help organizations deal with the vast amount of data, but help enterprises address today’s business challenges,” said Nijhawan. “However, before these trends lead to the next wave of business, it’s critical that organizations understand that the success is predicated upon data integrity.”
    Source: dzone.com, November 20, 2016
  • 4 benefits of predictive analytics improving healthcare

    There are so many wonderful ways predictive analytics will improve healthcare. Here are some of the potential benefits to consider.

    Medical care has long relied on the education and expertise of doctors. Yet human error is common: an estimated 250,000 people per year die from medical errors. As this is the third-leading cause of death in the United States, limiting errors is a key focus in the healthcare industry.

    Big data and predictive analytics will lead to healthcare improvement.

    But how? Health IT Analytics previously published an excellent paper on some of the best use cases of predictive analytics in healthcare. We reviewed other papers on the topic and condensed the best benefits into this article.

    1. Diagnostic accuracy will improve

    Diagnostic accuracy will improve with the help of predictive algorithms. Surveys will be incorporated that ask a person entering the emergency room with chest pain an array of questions.

    Algorithms could, potentially, use this information to determine if the patient should be sent home or if the patient is having a heart attack.

    Patients will still have insight from doctors, who will use the information to assist in a diagnosis. Predictive analytics is not designed to replace a doctor’s advice.
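    The triage idea described here can be sketched as a toy scoring model. This is an illustrative example only: the survey features, weights, and threshold below are invented for demonstration and are in no way a validated clinical tool.

```python
import math

# Hypothetical ER triage sketch: scores yes/no chest-pain survey answers
# with a hand-set logistic model. Feature names and weights are invented
# for illustration, not derived from clinical data.
WEIGHTS = {
    "age_over_60": 1.2,
    "pain_radiates_to_arm": 1.5,
    "shortness_of_breath": 1.0,
    "history_of_heart_disease": 1.8,
}
BIAS = -3.0  # baseline log-odds for a low-risk presentation

def heart_attack_risk(survey: dict) -> float:
    """Return a probability-like risk score from yes/no survey answers."""
    z = BIAS + sum(w for k, w in WEIGHTS.items() if survey.get(k))
    return 1 / (1 + math.exp(-z))

def triage(survey: dict, threshold: float = 0.5) -> str:
    """Flag patients above the threshold for immediate cardiac workup."""
    if heart_attack_risk(survey) >= threshold:
        return "cardiac workup"
    return "routine evaluation"
```

    In a real system the weights would be learned from historical outcomes and the output would inform, not replace, a clinician's judgment.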

    2. Early diagnoses and treatment options

    Big data will lead to earlier diagnoses, especially in deadly forms of cancer and disease. Annually, mesothelioma affects 2,000 to 3,000 people, but there’s a latency period that’s rarely less than 15 years and could be as long as 70 years.

    Predictive analysis will allow for doctors to put all of a person’s history into an algorithm to better determine the patient’s risk of certain diseases.

    And when a disease is found early on, treatment options are expanded. There are a variety of treatment options often available when a person is in good health. If doctors can predict a patient’s risk of cancer or certain illnesses, they can offer preventative care which may be able to slow the progression of the disease.

    Babylon Health has already raised $60 million to create an AI chatbot that will help with patient diagnoses.

    3. Improve patient outcomes

    One study suggests that patient outcomes will improve by 30% to 40%, while the cost of treatment will be reduced by 50%. Medical imaging diagnosis will improve with an enhancement in care delivery, too. The introduction of predictive analytics will allow patients to live longer and have a better medical outlook as a result.

    Consumers will work with physicians in a collaborative manner to provide better overall health histories.

    Doctors will be able to create models that help predict health risks using genome analysis and family history to help.

    4. Changes for hospitals and insurance providers

    Hospitals and insurance providers will also see changes, and the initial ones may be painful. Through predictive analysis, patients will be able to seek diagnoses without going to the hospital. Wearables may be able to predict health issues that a person is likely to face.

    Hospitals, insurance companies and pharmacies will initially lose revenue as fewer patients visit their facilities and fewer errors send people there.

    Hospitals and insurance companies will need to adapt to these changes or face losing profit and revenue in the process. Government funding may also increase in an effort to increase innovation in the market.

    Predictive analytics has the potential to help people live longer with better treatment options and predictive preventative care.

    Predictive analytics is the key solution to healthcare challenges

    Many healthcare challenges are still plaguing patients and healthcare providers around the United States. The good news is that new advances in predictive analytics are making it easier for healthcare providers to administer excellent care. Big data solutions will help healthcare providers lower healthcare costs and give patients excellent service that they expect and deserve.

    Author: Andrej Kovacevic

    Source: SmartDataCollective

  • A Shortcut Guide to Machine Learning and AI in The Enterprise


    Predictive analytics / machine learning / artificial intelligence is a hot topic – what’s it about?

    Using algorithms to help make better decisions has been the “next big thing in analytics” for over 25 years. It has been used the entire time in key areas such as fraud detection. But it has now become a full-throated mainstream business meme that features in every enterprise software keynote — although the industry is battling with what to call it.

    It appears that terms like Data Mining, Predictive Analytics, and Advanced Analytics are considered too geeky or old for industry marketers and headline writers. The term Cognitive Computing seemed to be poised to win, but IBM’s strong association with the term may have backfired — journalists and analysts want to use language that is independent of any particular company. Currently, the growing consensus seems to be to use Machine Learning when talking about the technology and Artificial Intelligence when talking about the business uses.

    Whatever we call it, it’s generally proposed in two different forms: either as an extension to existing platforms for data analysts; or as new embedded functionality in diverse business applications such as sales lead scoring, marketing optimization, sorting HR resumes, or financial invoice matching.

    Why is it taking off now, and what’s changing?

    Artificial intelligence is now taking off because there is a lot more data available, along with affordable, powerful systems to crunch through it all. It’s also much easier to get access to powerful algorithm-based software in the form of open-source products or embedded as a service in enterprise platforms.

    Organizations today are also more comfortable with manipulating business data, with a new generation of business analysts aspiring to become “citizen data scientists.” Enterprises can take their traditional analytics to the next level using these new tools.

    However, we’re now at the “Peak of Inflated Expectations” for these technologies according to Gartner’s Hype Cycle — we will soon see articles pushing back on the more exaggerated claims. Over the next few years, we will find out the limitations of these technologies even as they start bringing real-world benefits.

    What are the longer-term implications?

    First, easier-to-use predictive analytics engines are blurring the gap between “everyday analytics” and the data science team. A “factory” approach to creating, deploying, and maintaining predictive models means data scientists can have greater impact. And sophisticated business users can now access some of the power of these algorithms without having to become data scientists themselves.
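    The "factory" approach can be sketched as a small model registry that standardizes how models are versioned, deployed, and scored, so that business applications just call a score function. The class and the lead-scoring models below are hypothetical illustrations, not any vendor's API.

```python
# Minimal sketch of a model "factory": data scientists register model
# versions under a stable name; applications score against the latest
# (or a pinned) version without knowing the model internals.
class ModelFactory:
    def __init__(self):
        self._registry = {}

    def register(self, name: str, version: int, predict_fn):
        """Deploy a new model version under a stable name."""
        self._registry.setdefault(name, {})[version] = predict_fn

    def score(self, name: str, features: dict, version: int = None):
        """Score with the requested version, or the latest if unspecified."""
        versions = self._registry[name]
        v = version if version is not None else max(versions)
        return versions[v](features)

# Two invented versions of a lead-scoring model (integer points).
factory = ModelFactory()
factory.register("lead_scoring", 1, lambda f: 20 + 50 * f.get("opened_email", 0))
factory.register("lead_scoring", 2, lambda f: 10 + 60 * f.get("opened_email", 0))
```

    Callers automatically pick up version 2, while version 1 stays available for auditing or rollback; this separation is what lets a small data science team maintain many models in production.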

    Second, every business application will include some predictive functionality, automating any areas where there are “repeatable decisions.” It is hard to think of a business process that could not be improved in this way, with big implications in terms of both efficiency and white-collar employment.

    Third, applications will use these algorithms on themselves to create “self-improving” platforms that get easier to use and more powerful over time (akin to how each new semi-autonomous-driving Tesla car can learn something new and pass it on to the rest of the fleet).

    Fourth, over time, business processes, applications, and workflows may have to be rethought. If algorithms are available as a core part of business platforms, we can provide people with new paths through typical business questions such as “What’s happening now? What do I need to know? What do you recommend? What should I always do? What can I expect to happen? What can I avoid? What do I need to do right now?”

    Fifth, implementing all the above will involve deep and worrying moral questions in terms of data privacy and allowing algorithms to make decisions that affect people and society. There will undoubtedly be many scandals and missteps before the right rules and practices are in place.

    What first steps should companies be taking in this area?
    As usual, the barriers to business benefit are more likely to be cultural than technical.

    Above all, organizations need to make sure they have the right technical expertise to navigate the confusion of new vendor offerings, the right business knowledge to know where best to apply them, and the awareness that their technology choices may have unforeseen moral implications.

    Source: timoelliot.com, October 24, 2016


  • Big Data Predictions for 2016

    A roundup of big data and analytics predictions and pontifications from several industry prognosticators.

    At the end of each year, PR folks from different companies in the analytics industry send me predictions from their executives on what the next year holds. This year, I received a total of 60 predictions from a record 17 companies. I can't laundry-list them all, but I can and did put them in a spreadsheet (irony acknowledged) to determine the broad categories many of them fall in. And the bigger of those categories provide a nice structure to discuss many of the predictions in the batch.

    Predictions streaming in
    MapR CEO John Schroeder, whose company just added its own MapR Streams component to its Hadoop distribution, says "Converged Approaches [will] Become Mainstream" in 2016. By "converged," Schroeder is alluding to the simultaneous use of operational and analytical technologies. He explains that "this convergence speeds the 'data to action' cycle for organizations and removes the time lag between analytics and business impact."

    The so-called "Lambda Architecture" focuses on this same combination of transactional and analytical processing, though MapR would likely point out that a "converged" architecture co-locates the technologies and avoids Lambda's approach of tying the separate technologies together.

    Whether integrated or converged, Phu Hoang, the CEO of DataTorrent predicts 2016 will bring an ROI focus to streaming technologies, which he summarizes as "greater enterprise adoption of streaming analytics with quantified results." Hoang explains that "while lots of companies have already accepted that real-time streaming is valuable, we'll see users looking to take it one step further to quantify their streaming use cases."
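    At its core, streaming analytics boils down to incremental computation over time windows. A minimal sketch of a tumbling-window count follows; the event data is invented and no particular engine's API is implied.

```python
from collections import defaultdict

# Toy tumbling-window aggregation: group (timestamp, key) events into
# fixed-size windows and count occurrences per key in each window.
# Real streaming engines do this incrementally over unbounded streams.
def tumbling_window_counts(events, window_seconds=60):
    """Return {(window_start, key): count} for fixed, non-overlapping windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented click-stream events: (timestamp_in_seconds, event_type)
events = [(5, "login"), (30, "login"), (65, "purchase"), (70, "login")]
```

    The quantified-ROI angle Hoang describes comes from attaching business metrics (fraud flagged, ads served, churn predicted) to exactly this kind of per-window output.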

    Which industries will take charge here? Hoang says "FinTech, AdTech and Telco lead the way in streaming analytics." That makes sense, but I think heavy industry is, and will be, in a leadership position here as well.

    In fact, some in the industry believe that just about everyone will formulate a streaming data strategy next year. One of those is Anand Venugopal of Impetus Technologies, whom I spoke with earlier this month. Venugopal, in fact, feels that we are within two years of streaming data being looked upon as just another data source.

    Internet of predicted things
    It probably won't shock you that the Internet of Things (IoT) was a big theme in this year's round of predictions. Quentin Gallivan, Pentaho's CEO, frames the thoughts nicely with this observation: "Internet of Things is getting real!" Adam Wray, CEO at Basho, quips that "organizations will be seeking database solutions that are optimized for the different types of IoT data." That might sound a bit self-serving, but Wray justifies this by reasoning that this will be driven by the need to "make managing the mix of data types less operationally complex." That sounds fair to me.

    Snehal Antani, CTO at Splunk, predicts that "Industrial IoT will fundamentally disrupt the asset intelligence industry." Suresh Vasudevan, the CEO of Nimble Storage, proclaims that "in 2016 the IoT invades the datacenter." That may be, but IoT technologies are far from standardized, and that's a barrier to entry for the datacenter. Maybe that's why the folks at DataArt say "the IoT industry will [see] a year of competition, as platforms strive for supremacy." Maybe the data center invasion will come in 2017, then.

    Otto Berkes, CTO at CA Technologies, asserts that "Bitcoin-born Blockchain shows it can be the storage of choice for sensors and IoT." I hardly fancy myself an expert on blockchain technology, so I asked CA for a little more explanation around this one. A gracious reply came back, explaining that "IoT devices using this approach can transact directly and securely with each other...such a peer-to-peer configuration can eliminate potential bottlenecks and vulnerabilities." That helped a bit, and it incidentally shines a light on just how early-stage IoT technology still is, with respect to security and distributed processing efficiencies.

    Growing up
    Though admittedly broad, the category with the most predictions centered on the theme of value and maturity in Big Data products supplanting the fascination with new features and products. Essentially, value and maturity are proxies for the enterprise-readiness of Big Data platforms.

    Pentaho's Gallivan says that "the cool stuff is getting ready for prime time." MapR's Schroeder predicts "Shiny Object Syndrome Gives Way to Increased Focus on Fundamental Value," and qualifies that by saying "...companies will increasingly recognize the attraction of software that results in business impact, rather than focusing on raw big data technologies." In a related item, Schroeder predicts "Markets Experience a Flight to Quality," further stating that "...investors and organizations will turn away from volatile companies that have frequently pivoted in their business models."

    Sean Ma, Trifacta's Director of Product Management, looking at the manageability and tooling side of maturity, predicts that "Increasing the amount of deployments will force vendors to focus their efforts on building and marketing management tools." He adds: "Much of the capabilities in these tools...will need to replicate functionality in analogous tools from the enterprise data warehouse space, specifically in the metadata management and workflow orchestration." That's a pretty bold prediction, and Ma's confidence in it may indicate that Trifacta has something planned in this space. But even if not, he's absolutely right that this functionality is needed in the Big Data world. In terms of manageability, Big Data tooling needs not just to achieve parity with data warehousing and BI tools, but to surpass them.

    The folks at Signals say "Technology is Rising to the Occasion" and explain that "advances in artificial intelligence and an understanding [of] how people work with data is easing the collaboration between humans and machines necessary to find meaning in big data." I'm not sure if that is a prediction, or just wishful thinking, but it certainly is the way things ought to be. With all the advances we've made in analyzing data using machine learning and intelligence, we've left sifting through the output a largely manual process.

    Finally, Mike Maciag, the COO at AltiScale, asserts this forward-looking headline: "Industry standards for Hadoop solidify." Maciag backs up his assertion by pointing to the Open Data Platform initiative (ODPi) and its work to standardize Hadoop distributions across vendors. ODPi was originally anchored by Hortonworks, with numerous other companies, including AltiScale, IBM and Pivotal, jumping on board. The organization is now managed under the auspices of the Linux Foundation.

    Artificial flavor
    Artificial Intelligence (AI) and Machine Learning (ML) figured prominently in this year's predictions as well. Splunk's Antani reasons that "Machine learning will drastically reduce the time spent analyzing and escalating events among organizations." But Lukas Biewald, Founder and CEO of Crowdflower insists that "machines will automate parts of jobs -- not entire jobs." These two predictions are not actually contradictory. I offer both of them, though, to point out that AI can be a tool without being a threat.

    Be that as it may, Biewald also asserts that "AI will significantly change the business models of companies today." He expands on this by saying "legacy companies that aren't very profitable and possess large data sets may become more valuable and attractive acquisition targets than ever." In other words, if companies found gold in their patent portfolios previously, they may find more in their data sets, as other companies acquire them to further their efforts in AI, ML and predictive modeling.

    And more
    These four categories were the biggest among all the predictions, but not the only ones, to be sure. Predictions around cloud, self-service, flash storage and the increasing prominence of the Chief Data Officer were in the mix as well. A number of predictions that stood on their own were there too, speaking to issues ranging from salaries for Hadoop admins to open source, open data and container technology.

    What's clear from almost all the predictions, though, is that the market is starting to take basic big data technology as a given, and is looking towards next-generation integration, functionality, intelligence, manageability and stability. This implies that customers will demand certain baseline data and analytics functionality to be part of most technology solutions going forward. And that's a great sign for everyone involved in Big Data.

    Source: ZDNet


  • Business Intelligence in 3PL: Mining the Value of Data

    In today’s business world, “information” is a renewable resource and virtually a product in itself. Business intelligence technology enables businesses to capture historical, current and predictive views of their operations, incorporating such functions as reporting, real-time analytics, data and process mining, performance management, predictive analytics, and more. Thus, information in its various forms and locations possesses genuine inherent value.
    In the real world of warehousing, the availability of detailed, up-to-the-minute information on virtually every item in the operators’ custody, from inbound dock to delivery site, leads to greater efficiency in every area it touches. Logic suggests that greater profitability ensues.
    Three areas of 3PL operations seem to benefit most from savings opportunities identified through business intelligence solutions: labor, inventory, and analytics.
    In the first case, business intelligence tools can help determine the best use of the workforce, monitoring its activity in order to assure maximum effective deployment. The result: potentially major jumps in efficiency, dramatic reductions in downtime, and healthy increases in productivity and billable labor.
    In terms of inventory management, the metrics obtainable through business intelligence can stem inventory inaccuracies that would have resulted in thousands of dollars in annual losses, while also reducing write-offs.
    Analytics through business intelligence tools can also accelerate the availability of information, as well as provide the optimal means of presentation relative to the type of user. One such example is the tracking of real-time status of work load by room or warehouse areas; supervisors can leverage real-time data to re-assign resources to where they are needed in order to balance workloads and meet shipping times. A well-conceived business intelligence tool can locate and report on a single item within seconds and a couple of clicks.
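    The real-time workload view described here can be sketched as a simple roll-up of open tasks by warehouse area, which is all a supervisor needs to decide where to shift labor. The task records and area names are made up for illustration.

```python
# Toy workload dashboard query: count open tasks per warehouse area,
# busiest areas first, so supervisors can rebalance labor in real time.
def workload_by_area(tasks):
    """Return [(area, open_task_count), ...] sorted busiest first."""
    totals = {}
    for task in tasks:
        if task["status"] == "open":
            totals[task["area"]] = totals.get(task["area"], 0) + 1
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Invented snapshot of in-flight work across two areas.
tasks = [
    {"area": "Cooler", "status": "open"},
    {"area": "Dock 3", "status": "open"},
    {"area": "Cooler", "status": "open"},
    {"area": "Dock 3", "status": "done"},
]
```

    In production the task list would stream from the warehouse management system, but the aggregation behind the dashboard is no more complicated than this.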
    Extending the Value
    The value of business intelligence tools is definitely not confined to the product storage areas.
    With automatically analyzed information available in a dashboard presentation, users – whether in the office or on the warehouse floor – can view the results of their queries/searches in a variety of selectable formats, choosing the presentation based on its usefulness for a given purpose. Examples:
    • Status checks can help identify operational choke points, such as if/when/where an order has been held up too long; if carrier wait-times are too long; and/or if certain employees have been inactive for too long.
    • Order fulfillment dashboards can monitor orders as they progress through the picking, staging and loading processes, while also identifying problem areas in case of stalled processes.
    • Supervisors walking the floor with handheld devices can both encourage team performance and, at the same time, help assure efficient dock-side activity. Office and operations management are able to monitor key metrics in real-time, as well as track budget projections against actual performance data.
    • Customer service personnel can call up business intelligence information to assure that service levels are being maintained or, if not, institute measures to restore them.
    • And beyond the warehouse walls, sales representatives in the field can access mined and interpreted data via mobile devices in order to provide their customers with detailed information on such matters as order fill rates, on-time shipments, sales and order volumes, inventory turnover, and more.
    Thus, well-designed business intelligence tools not only can assemble and process both structured and unstructured information from sources across the logistics enterprise, but can deliver it “intelligently” – that is, optimized for the person(s) consuming it. These might include frontline operators (warehouse and clerical personnel), front line management (supervisors and managers), and executives.
    The Power of Necessity
    Chris Brennan, Director of Innovation at Halls Warehouse Corp., South Plainfield N.J., deals with all of these issues as he helps manage the information environment for the company’s eight facilities. Moreover, as president of the HighJump 3PL User Group, he strives to foster collective industry efforts to cope with the trends and issues of the information age as it applies to warehousing and distribution.
    “Even as little as 25 years ago, business intelligence was a completely different art,” Brennan has noted. “The tools of the trade were essentially networks of relationships through which members kept each other apprised of trends and happenings. Still today, the power of mutual benefit drives information flow, but now the enormous volume of data available to provide intelligence and drive decision making forces the question: Where do I begin?”
    Brennan has taken a leading role in answering his own question, drawing on the experience and insights of peers as well as the support of HighJump’s Enterprise 3PL division to bring Big Data down to size:
    “Business intelligence isn’t just about gathering the data,” he noted, “it’s about getting a group of people with varying levels of background and comfort to understand the data and act upon it. Some managers can glance at a dashboard and glean everything they need to know, but others may recoil at a large amount of data. An ideal BI solution has to relay information to a diverse group of people and present challenges for them to think through.”
    source: logisticviewpoints.com, December 6, 2016
  • Data as an ingredient on the road to digital maturity

    On 21 January, Stéphane Hamel visited the High Tech Campus in Eindhoven: a prime opportunity for a hefty dose of inspiration from one of the world’s foremost thinkers in digital analytics. At digital maturity day 2016 (#DMD2016), Hamel explained the Digital Analytics Maturity model.

    Imperfect data

    According to Stéphane Hamel, the difference between a good and an excellent analyst is this: the excellent analyst knows how to reach decisions or useful advice even from imperfect data. “Data will never be perfect; knowing how bad the data is is essential. If you know 5 or 10% is bad, there is no problem,” says Hamel.
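    Hamel's point that you should quantify how bad the data is before trusting an analysis can be sketched as a simple completeness check; the record fields used here are illustrative assumptions.

```python
# Toy data-quality check: measure the fraction of records that are
# unusable (missing a required field) before running any analysis.
def bad_data_rate(records, required=("user_id", "timestamp")):
    """Return the fraction of records missing any required field."""
    if not records:
        return 0.0
    bad = sum(1 for r in records if any(not r.get(f) for f in required))
    return bad / len(records)

# Invented analytics records; the second one is missing its timestamp.
records = [
    {"user_id": 1, "timestamp": "2016-01-21T10:00"},
    {"user_id": 2, "timestamp": None},
    {"user_id": 3, "timestamp": "2016-01-21T10:05"},
]
```

    Knowing the rate is the point: a known 5-10% defect rate can be worked around, while an unknown one undermines every conclusion drawn from the data.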

    Analytics = Context + Data + Creativity

    Analytics sounds like a field for data geeks and nerds. That image is wrong: beyond the data itself, recognizing the context in which the data was collected and bringing creativity to its interpretation are essential. To understand data you have to step away from your laptop or PC. Only by bringing the world “out there” into your analysis can a data analyst arrive at meaningful insights and recommendations.

    Hamel gives an example from the classroom: when a group of students was shown the 2010 dataset of Save the Children, some thought the tenfold increase in website traffic was due to a campaign or coincidence. The real cause was the earthquake in Haiti.

    Digital Maturity Assessment

    The Digital Maturity Assessment model was developed from the digital transformations of hundreds of companies worldwide. Based on this experience, Stéphane knows which challenges companies must overcome on the road to digital leadership.

    (Figure: Digital Analytics Maturity model – Stéphane Hamel)

    You can of course use this model to benchmark your own organization against other companies. According to Hamel, however, the real value lies in benchmarking yourself against yourself. In short, it helps to start the conversation internally. If you are switching tooling for the third time, you are the problem, not the technology.

    Hamel prefers a consistent score across the five criteria of the Digital Maturity Assessment model: better a two overall than outliers up or down. The factor that usually scores weakest is “process.”

    This criterion stands for the way data is collected, analyzed, and interpreted. Often the process itself is not badly designed at all, but data analysts struggle to explain to colleagues or the management team which steps they have taken. Hamel therefore emphasizes: “you need a digital culture, not a digital strategy.”

    Embrace the folks in IT

    Give IT the chance to really help you. Not by saying “run this or fix that,” but by asking IT to solve a problem together with you. Hamel therefore sees digital analysts above all as change agents, not as dusty data professionals. Precisely this shift in approach and role means that before long we will no longer speak of digital analytics, but simply of “analytics.”

    Data is the raw material of my craft

    Hamel’s favorite motto, “data is the raw material of my craft,” refers to the craftsmanship and passion that Stéphane Hamel brings to the field of digital analytics. Stéphane’s hunger to make a difference in digital analytics was once sparked during a board meeting. Hamel sat in that meeting as the “IT guy” and was not taken seriously when he tried to use data to identify business problems and opportunities.

    This spurred Hamel, with the support of his boss, to pursue an MBA. With results: he completed the MBA in the top 5 percent of all students. Since then he has operated at the intersection of data and business processes, including in the stock exchange world and the insurance industry.

    Digital is the great absentee in education

    Hamel’s highly impressive career includes recognition as one of the world’s few Certified Web Analysts, a ‘Most Influential Industry Contributor’ award from the Digital Analytics Association, and co-management of the largest Google+ community on Google Analytics. Yet Hamel considers his greatest achievement to be shedding the ‘IT guy’ label.

    His greatest ambition for the near future is to write a textbook on digital analytics. A lot of information is available digitally, but much content is still missing in offline format. Precisely because other speakers at #DMD16 also pointed to the lagging level of digital skills in Dutch higher professional and academic education, I asked Hamel what tips he has for Dutch education.

    Fundamentally, according to Hamel, the ‘digital’ component should be woven much more strongly through the curriculum as a common thread. Students should be encouraged to enrich the content themselves with their own examples. In this way, ever better content emerges through co-creation between teachers, authors and students.

    The promise of big data and marketing automation

    Hamel certainly sees the added value of marketing automation in B2B, where the relationship with customers and prospects is more personal. However, marketing automation is sometimes deployed incorrectly, with email used to create the impression of a personal, human dialogue. Hamel: “I still believe in genuine, human interaction. There is a limit to how you can leverage marketing automation.”

    Digital Maturity (source: Prezi, Joeri Verbossen)

    The biggest obstacle to a successful introduction of marketing automation is therefore the maturity of the organization. As long as that maturity is insufficient, a software package will above all remain a cost item. A culture change must take place so that the organization regards the software as a necessary precondition for executing its strategy.

    Hamel uses the same sober words about the promise of big data. All too often he hears in companies: “We need big data!” His answer is then: “No, you don’t need big data, you need solutions. As long as it does the job, I’m happy.”

    Source: Marketingfacts

  • Dataiku named Data Science Partner of the Year by Snowflake


    Excellent performance in technical integrations, technical skills, collaboration options and sales traction
    Dataiku received the Data Science Partner of the Year award at Snowflake’s virtual partner summit. Through Dataiku’s partnership with Snowflake and their far-reaching integrations, the two companies can seamlessly deliver enterprise-ready AI solutions, allowing customers to easily build, deploy and monitor new data science projects, including machine learning and deep learning.
    Dataiku is a key part of the Snowflake Data Cloud ecosystem and makes it even easier for Snowflake customers to harness the possibilities of advanced analytics and AI-driven applications. With Dataiku, Snowflake customers can use a unique visual interface that enables collaboration among diverse users. Both business users and data teams have access to the Data Cloud to develop automated data pipelines, powerful analytics and practical AI-driven applications.
    Dataiku and Snowflake announced an expanded partnership in November 2020 and in April 2021, and Dataiku received a new investment from Snowflake Ventures. Dataiku is also an Elite partner (the highest attainable level for Snowflake partners) and recently received Tech Ready status for optimizing its Snowflake integrations, with an emphasis on best practices in functionality and performance. Dataiku offers native integrations with Snowflake, including authentication, native connectivity, data loading, data transformation and push-down of Dataiku workloads to Snowflake computing for feature engineering and writing predictions back to Snowflake.
    Dataiku’s recognition as Data Science Partner of the Year illustrates the shared goal of offering joint customers, now some 80 companies including Novartis and DAZN, the best possible data experience. The powerful combination of Snowflake’s real-time performance and near-unlimited scalability with Dataiku’s machine learning and model management capabilities expands the AI possibilities for customers across industries. This is reinforced by the recently introduced Dataiku Online, a complete SaaS offering integrated with Snowflake that enables very smooth handling of customer data and fast time-to-value for customers of any size.
    “Companies are looking for ways to improve their operations with the help of AI. That is why it is more important than ever to have data science tools that can be integrated quickly and easily,” says Florian Douetteau, CEO at Dataiku. “We are very pleased with the recognition as Snowflake’s Data Science Partner of the Year. For us, it confirms our shared goal of simplifying and improving data science workflows and unlocking the power of advanced analytics for businesses.”
    Snowflake and Dataiku are working on more innovative solutions that will give customers fast access to new cloud data essential for developing powerful AI-driven applications and processes. With Dataiku’s push-down architecture, customers can benefit from per-second usage-based pricing for data processing, AI development and running data through AI pipelines.
    The partnership will be further strengthened by Snowpark, the new developer environment in Snowflake. Snowpark, currently in preview, gives data engineers, data scientists and developers the ability to write code in languages such as Scala, Java and Python using familiar programming concepts. Those workloads, such as data transformation, data preparation and feature engineering, can then be executed in Snowflake.
    “As a partner, Dataiku has helped sell, integrate and implement joint technologies for dozens of shared customers. We are pleased to reward their efforts with the Snowflake Data Science Partner of the Year Award,” says Colleen Kapase, SVP of WorldWide Partners and Alliances at Snowflake. “Our collaboration with Dataiku helps joint customers leverage AI and ML in practical, scalable ways. We very much look forward to the innovations and positive customer outcomes that will come out of this partnership.”
    Source: Dataiku
  • The 5 promises of big data

    Big data is a phenomenon that is hard to define. Many will have heard of the 3 V’s: volume, velocity and variety. In short, big data is about large volumes, high speed (real time) and varied/unstructured data. Depending on the organization, however, big data has many faces.

    To analyze how big data can best be integrated into a company, it is important to first have a clear picture of what big data actually offers. This can best be summarized in the following five promises:

    1. Predictive: Big data generates predictive results that say something about the future of your organization or the outcome of a concrete action;
    2. Actionable results: Big data creates opportunities for direct action on findings, without human intervention;
    3. Realtime: The new speed standards let you respond immediately to new situations;
    4. Adaptive: A well-designed model constantly adapts itself automatically as situations and relationships change;
    5. Scalable: Processing and storage capacity scale linearly, so you can respond flexibly to new requirements.

    These five big data promises can only be realized by deploying three big data disciplines/roles: the big data scientist, the big data engineer and the big data infrastructure specialist.


    Predictive

    In a classic business intelligence environment, reports are generated about the current status of the company. With big data, however, we are talking not about the past or the present situation, but about predictive analytics.

    Predictive reporting becomes possible when the data scientist applies pattern-recognition techniques to historical data and works the patterns found into a model. The model can then load the history and, based on current events/transactions, extend the patterns into the future. In this way a manager can shift from reactive management to anticipatory management.
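    As a minimal illustration of that idea (not the article's own method), the sketch below fits a simple linear trend to a hypothetical sales history and extends it forward; all function names and figures are invented:

```python
# Toy predictive model: learn a pattern (here, a linear trend) from
# historical data, then extend it into the future.

def fit_linear_trend(history):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(history)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(history) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, history)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def forecast(history, steps):
    """Project the fitted trend `steps` periods past the history."""
    a, b = fit_linear_trend(history)
    n = len(history)
    return [a + b * (n + i) for i in range(steps)]

sales = [100, 110, 121, 128, 140, 152]   # invented monthly history
print(forecast(sales, 3))                # projects the trend 3 periods ahead
```

    A real data scientist would of course use richer pattern-recognition techniques than a straight line, but the shape of the process — fit on history, extrapolate on current data — is the same.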

    Actionable results

    Actionable results arise when findings from the data scientist’s models are translated directly into decisions in business processes. The data engineer builds the coupling and the data scientist ensures that the model delivers its output in the right format. The promise of actionable results is thus partly fulfilled by the big data specialists, but the largest part depends on the attitude of the management team.

    Management has the task of adopting a new way of steering. It no longer steers the micro-processes themselves, but the models that automate those processes. For example, management no longer decides when which machine should be maintained, but which risk margins the decision model may use to optimize maintenance costs.


    Realtime

    With big data, people often think of terabyte-scale volumes that must be processed. The ‘big’ in big data, however, depends entirely on the dimension of speed. Processing 10 TB of data in an hour is big data, but processing 500 MB is also big data if the requirement is that it happens within two hundred milliseconds. Realtime processing sits in that latter high-speed domain. There is no golden rule, but one often speaks of realtime when the response time is within five hundred milliseconds. Achieving these speeds requires a combination of all three big data disciplines.

    The big data infrastructure specialist has the task of optimizing the storage and reading of data. Speed is gained by structuring the data entirely around the way the model reads it: all flexibility in the data is given up in order to consume it as fast as possible from a single perspective.

    The big data engineer contributes by optimizing the speed of the couplings between the data sources and the consumers, offering those couplings in a distributed format. A theoretically unlimited number of resources can then be switched on to distribute the data, and every doubling of resources doubles capacity. It is also up to the big data engineer to convert the models the data scientist develops into a format that isolates all of the model’s sub-analyses and distributes them as much as possible across the available resources. Data scientists often work in programming languages such as R and Matlab, which are ideal for exploring the data and the various candidate models. These languages, however, do not lend themselves well to distributed processing, so the big data engineer, often in collaboration with the data scientist, must translate the prototype model into a production-grade programming language such as Java or Scala.
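    A toy sketch of that isolation-and-distribution step, assuming the sub-analyses are independent and the data splits evenly. A thread pool is used only to keep the sketch self-contained; real pipelines distribute over processes or a cluster, and `sub_analysis` is a hypothetical stand-in for one isolated piece of a model:

```python
# Split the work into independent chunks and map them over a pool of
# workers; the partial results are combined afterwards.
from concurrent.futures import ThreadPoolExecutor

def sub_analysis(chunk):
    # stand-in for one isolated sub-analysis of the model
    return sum(x * x for x in chunk)

def run_distributed(data, n_workers=4):
    # split into equal chunks (assumes len(data) divides evenly)
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(sub_analysis, chunks)
    return sum(partials)

print(run_distributed(list(range(1_000))))  # same answer as a serial loop
```

    The point of the pattern is that, as long as the chunks are truly independent, adding workers adds capacity roughly linearly.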

    The data scientist, as discussed, provides the models and thus the logic of the data processing. To operate in real time, it is this person’s task to limit the complexity of the data processing to below exponential. An optimal result therefore requires the three disciplines to work together.


    Adaptive

    We can speak of an adaptive environment, also called machine learning or artificial intelligence, when the intelligence of that environment autonomously adapts to new developments in the domain being modeled. To make this possible, it is important that the model has gained enough experience to learn on its own. The more information available about the model throughout its history, the broader the experience it can build on.
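    A minimal sketch of such an adaptive model, assuming a single-feature linear relationship and an invented learning rate: the weight keeps tracking the data as the underlying relationship changes, without retraining from scratch:

```python
# Toy online-learning model: one weight, updated with a small
# stochastic-gradient step on every new observation.

class AdaptiveModel:
    def __init__(self, lr=0.05):
        self.w = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x

    def update(self, x, y):
        # one gradient step on squared error (y_hat - y)^2
        error = self.predict(x) - y
        self.w -= self.lr * error * x

model = AdaptiveModel()
for _ in range(200):          # phase 1: the true relation is y = 2x
    model.update(1.0, 2.0)
for _ in range(200):          # phase 2: the world changes to y = 5x
    model.update(1.0, 5.0)
print(round(model.w, 2))      # the weight has followed the change
```

    The history of observations is exactly the "experience" the paragraph refers to: the more of it the model has seen, the better calibrated its adaptation.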


    Scalable

    Scalability is achieved when there is theoretically unlimited processing capacity as unlimited numbers of computers are added: when you need four times as much capacity, you add four times as many computers, and when you need a thousand times more, you add a thousand computers. This sounds simple, but until recently such cooperation between computers was a very complex task.

    Every discipline has a role in making and keeping big data solutions scalable. The big data infrastructure specialist ensures the scalability of reading, writing and storing data; the big data engineer ensures the scalability of consuming and producing data; and the big data scientist ensures the scalability of the intelligent processing of the data.

    Big data, big deal?

    To exploit the full possibilities of big data, it is therefore very important to engage a multidisciplinary team. This may sound as if very large investments are immediately required, but big data also offers the possibility of starting small. A data scientist can run the various analyses on a laptop or a local server, creating a number of short-term wins for your organization with minimal investment. Once the added value of big data is visible, it is a relatively small step to put a big data environment into production and steer your organization in a data-driven way.

    Source: Computable

  • Five Mistakes That Can Kill Analytics Projects

    Launching an effective digital analytics strategy is a must-do to understand your customers. But many organizations are still trying to figure out how to get business value from expensive analytics programs. Here are five common analytics mistakes that can kill any predictive analytics effort.

    Why predictive analytics projects fail


    Predictive analytics is becoming the next big buzzword in the industry. But according to Mike Le, co-founder and chief operating officer at CB/I Digital in New York, implementing an effective digital analytics strategy has proven to be very challenging for many organizations. “First, the knowledge and expertise required to set up and analyze digital analytics programs is complicated,” Le notes. “Second, the investment for the tools and such required expertise could be high. Third, many clients see unclear returns from such analytics programs. Learning to avoid common analytics mistakes will help you save a lot of resources to focus on core metrics and factors that can drive your business ahead.” Here are five common mistakes that Le says cause many predictive analytics projects to fail.

    Mistake 1: Starting digital analytics without a goal

    “The first challenge of digital analytics is knowing what metrics to track, and what value to get out of them,” Le says. “As a result, we see too many web businesses that don’t have basic conversion tracking set up, or can’t link the business results with the factors that drive those results. This problem happens because these companies don’t set a specific goal for their analytics. When you do not know what to ask, you cannot know what you'll get. The purpose of analytics is to understand and to optimize. Every analytics program should answer specific business questions and concerns. If your goal is to maximize online sales, naturally you’ll want to track the order volume, cost-per-order, conversion rate and average order value. If you want to optimize your digital product, you’ll want to track how users interact with your product, the usage frequency and the churn rate of people leaving the site. When you know your goal, the path becomes clear.”
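    As a small worked example of the metrics Le names for an online-sales goal, computed from invented campaign figures:

```python
# Hypothetical campaign numbers (all figures invented for illustration).
visits = 20_000
orders = 400
revenue = 30_000.00     # total order value
ad_spend = 5_000.00

conversion_rate = orders / visits        # share of visits that convert
cost_per_order = ad_spend / orders       # acquisition cost per order
average_order_value = revenue / orders

print(f"order volume:        {orders}")
print(f"conversion rate:     {conversion_rate:.1%}")      # 2.0%
print(f"cost per order:      {cost_per_order:.2f}")       # 12.50
print(f"average order value: {average_order_value:.2f}")  # 75.00
```

    Each number answers a specific business question, which is exactly the discipline Le is arguing for: pick the metrics that follow from the goal, then track only those.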

    Mistake 2: Ignoring core metrics to chase noise

    “When you have advanced analytics tools and strong computational power, it’s tempting to capture every data point possible to ‘get a better understanding’ and ‘make the most of the tool,’” Le explains. “However, following too many metrics may dilute your focus on the core metrics that reveal the pressing needs of the business. I've seen digital campaigns that fail to convert new users, but the managers still set up advanced tracking programs to understand user behaviors in order to serve them better. When you cannot acquire new users, your targeting could be wrong, your messaging could be wrong or there may even be no market for your product - those problems are much bigger to solve than trying to understand your user engagement. Therefore, it would be a waste of time and resources to chase fancy data and insights while the fundamental metrics are overlooked. Make sure you always stay focused on the most important business metrics before looking broader.”

    Mistake 3: Choosing overkill analytics tools

    “When selecting analytics tools, many clients tend to believe that more advanced and expensive tools can give deeper insights and solve their problems better,” Le says. “Advanced analytics tools may offer more sophisticated analytic capabilities than some fundamental tracking tools. But whether your business needs all those capabilities is a different story. That's why the decision to select an analytics tool should be based on your analytics goals and business needs, not on how advanced the tools are. There’s no need to invest a lot of money in big analytics tools and a team of experts for an analytics program when some advanced features of free tools like Google Analytics can already give you the answers you need.”

    Mistake 4: Creating beautiful reports with little business value

    “Many times you see reports that simply present a bunch of numbers exported from tools, or state some ‘insights’ that have little relevance to the business goal,” Le notes. “This problem is so common in the analytics world, because a lot of people create reports for the sake of reporting. They don’t think about why those reports should exist, what questions they answer and how those reports can add value to the business. Any report must be created to answer a business concern. Any metrics that do not help answer business questions should be left out. Making sense of data is hard. Asking the right questions early will help.”


    Mistake 5: Failing to detect tracking errors

    “Tracking errors can be devastating to businesses, because they produce unreliable data and misleading analysis,” Le cautions. “But many companies do not have the skills to set up tracking properly, and worse, to detect tracking issues when they happen. There are many things that can go wrong, such as a developer mistakenly removing the tracking pixels, transferring incorrect values, the tracking code firing unstably or multiple times, wrong tracking-rule logic, etc. The difference can be so subtle that the reports look normal, or are only wrong in certain scenarios. Tracking errors easily go undetected because catching them takes a mix of marketing and tech skills. Marketing teams usually don’t understand how tracking works, and development teams often don’t know what ‘correct’ means. To tackle this problem, you should frequently check your data accuracy and look for unusual signs in reports. Analysts should take an extra step to learn the technical aspect of tracking, so they can better sense the problems and raise smart questions for the technical team when the data looks suspicious.”
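    One simple form such an accuracy check could take (a hypothetical sketch, not Le's method): compare each day's tracked event count against a trailing average and flag sharp deviations, which can surface a removed pixel or a double-firing tag. Thresholds and data are invented:

```python
# Flag days whose event counts deviate sharply from the recent average.

def flag_anomalies(daily_counts, window=7, tolerance=0.5):
    """Return indices of days that differ from the trailing-window
    mean by more than `tolerance` (as a fraction of that mean)."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if abs(daily_counts[i] - baseline) > tolerance * baseline:
            flagged.append(i)
    return flagged

counts = [1000, 980, 1020, 990, 1010, 1000, 995,
          1005, 420,   # pixel removed: counts collapse
          2010]        # tag fires twice: counts roughly double
print(flag_anomalies(counts))  # → [8, 9]
```

    A check this crude will not catch subtle errors (wrong values in otherwise-normal volumes), but it is cheap to run daily and catches the catastrophic failure modes first.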

    Author: Mike Le

    Source: Information Management

  • How do data-driven organizations really distinguish themselves?


    You often hear it in boardrooms: we want to be a data-driven organization. We want to get started with IoT, (predictive) analytics or location-based services. And yes, those are sexy applications. But what are the real business drivers? They often remain underexposed. Research shows in which areas organizations with high ‘data maturity’ lead the way.

    SAS surveyed almost 600 decision-makers and, based on their answers, divided the respondents into three groups: the leaders, a middle group and the laggards. This gives a good view of how the leaders distinguish themselves from the laggards.

    The first thing that stands out is the proactive attitude. Leaders free up budget to replace old processes and systems and invest in the challenge of data integration. There is also a culture of continuous improvement: these companies are constantly and actively looking for opportunities to improve. This is in contrast to the laggards, who only want to invest in improvements once they know exactly how high the ROI will be.

    The leaders most often replace their old systems with open source data platforms, with Hadoop by far the most popular. Besides technology, these companies also invest more in cleaning up data. They have set up good processes to ensure that data is up to date and of the right quality for its intended use. Their governance of these processes is also better than in the companies that lag behind (read here about increasing the ROI on data and IT).

    Leaders also invest more in talent. 73 percent of these companies have a dedicated data team staffed with their own people. The laggards more often have either no data team at all or a team filled with external people. Leaders also invest more in recruiting and selecting qualified staff. As a result, ‘only’ 38 percent of the leaders experience a shortage of internal skills, compared to 62 percent of the laggards.

    All this means that leaders are better prepared for the GDPR, which takes effect in 2018.

    They are better able to name the risks attached to a data-driven strategy, and they have taken measures to cover or reduce those risks.

    For many organizations, the arrival of the GDPR is a reason to invest in a good data strategy. But it is not the only reason. Companies with high data maturity can:

    • answer complicated questions faster
    • make decisions faster
    • innovate and grow faster
    • improve the customer experience
    • realize growth in revenue and market share
    • achieve a shorter time-to-market for new products and services
    • optimize business processes
    • produce better strategic plans and reports

    All the more reason, then, to really invest in data governance and data management rather than merely proclaiming that your organization is data-driven. 90 percent of respondents consider themselves data-driven, but the reality is unfortunately less rosy.

    Interested in the full research results?
    Download the report ‘How data-driven organisations are winning’ here.


    Source: Rein Mertens (SAS)

    In: www.Analyticstoday.nl

  • How to create value with predictive analytics and data mining

    The growing amount of data brings with it a flood of questions. The main question is what we can do with that data: can better services be offered and risks avoided? Unfortunately, at most companies that question remains unanswered. How can companies add value to data and move on to predictive analytics, machine learning and decision management?

    Predictive analytics: the crystal ball for the business

    Data mining makes hidden patterns in data visible, allowing the future to be predicted. Companies, scientists and governments have been using such methods for decades to derive insights about future situations from data. Modern companies use data mining and predictive analytics to, among other things, detect fraud, prevent security breaches and optimize inventory management. Through an iterative analytical process, they bring together data, exploration of that data and the deployment of the new insights gained from it.

    Data mining: business in the lead

    Decision management ensures that these insights are converted into actions in the operational process. The question is how to shape this process within a company. It always starts with a question from the business and ends with an evaluation of the actions. What this Analytical Life Cycle looks like, and which questions are relevant per industry, can be read in Data Mining From A to Z: How to Discover Insights and Drive Better Opportunities.


    In addition to this model, which shows how your company can deploy this process, the paper goes deeper into the role of data mining in the research stage. By working this out further via the step-by-step plan below, even more value can be extracted from data.

    1. Turn the business question into an analytical hypothesis

    2. Prepare the data for data mining

    3. Explore the data

    4. Put the data into a model

    Want to know how your company can also use data to answer tomorrow’s questions and provide better service? Then download “Data Mining From A to Z: How to Discover Insights and Drive Better Opportunities.”

  • How COVID-19 related uncertainties impact predictive analytics


    Rapid, sometimes dramatic change has become 'the new normal', but what we know now can help us improve predictive model accuracy.

    Businesses have had to grapple with a lot of change caused by the COVID-19 pandemic. One of the obvious side effects was compromised predictive model accuracy. What worked well in 2019 won't work as well or at all in 2020 if the training data is out of sync with what's happening now.

    In the beginning

    COVID-19's effects are truly novel. While there have been other pandemics and financial crises in recent history, none of them exactly mirror what's happened in 2020. The Spanish Flu pandemic may be the closest parallel, but there's little data available about it compared to the 2008 financial crisis, for example.

    Unlike the early days of the COVID-19 pandemic, there's now more information about its effects on organizational and customer behavior. However, at any moment, the current situation could change, such as a second round of shutdowns.

    "We need to remind ourselves to be incredibly agile when it comes to building models," said Drew Farris, director of AI research at Booz Allen Hamilton. "I've encountered some environments in the past where they roll out one new model every six months, and that's just not tenable. I think increasing modeling agility through automation is more relevant now than any other time, just simply because the data is changing so quickly."

    Continued uncertainty

    By now, it's obvious that the pandemic and its effects won't disappear anytime soon, so organizations and data scientists need to be able to adapt as necessary.

    "As a data scientist, you need to be willing to challenge your assumptions, toss out the theories that you had yesterday and formulate new ones, but then also run the experiments with the data to be able to prove or disprove those hypotheses," said Farris. "To the extent that you can use automated tooling to do that, it's very important."

    The companies in the best position to adapt to sudden change have been modernizing their tech stacks to become more agile. Nevertheless, they still need a way to identify signals that indicate future trends. Booz Allen Hamilton was recently doing some work involving linear regressions, but it switched to agent-based modeling.

    "It's basically setting up a dynamic system where you have individual actors in that system that you're modeling out, and you're using the data about these actors to figure out what steps will happen next," said Farris. "It's really nothing new, but the bottom line is it allows us to look more forward into the future by analyzing system dynamics, as opposed to just sort of measuring the data that we're seeing from past history."
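    A deliberately tiny sketch of what agent-based modeling means here (illustrative only; the rules, parameters and adoption scenario are invented): individual actors are stepped forward in time, and the system-level trajectory is read off the simulation rather than regressed from past aggregates:

```python
# Minimal agent-based model: each agent's chance of adopting a behavior
# grows with the current adoption rate, creating system-level dynamics
# that emerge from individual decisions.
import random

def simulate(n_agents=1000, steps=10, influence=0.1, seed=42):
    rng = random.Random(seed)           # fixed seed for reproducibility
    adopted = [False] * n_agents        # e.g. customers adopting a behavior
    history = []
    for _ in range(steps):
        rate = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < 0.02 + influence * rate:
                adopted[i] = True
        history.append(sum(adopted) / n_agents)
    return history

trajectory = simulate()
print([round(r, 2) for r in trajectory])  # adoption share per step
```

    Because the next state is generated from the actors' rules rather than fitted to history, the same machinery can be re-run under different rules to explore futures the historical data never contained.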

    Given the constant state of change, it's important for organizations to be able to respond and adapt to changing circumstances by identifying multiple sources of data that can provide a complete perspective of what's taking place.

    "It's gotten to the point, or we're rapidly getting to a point, where it is considerably less expensive to run a plethora of other models to understand different outcomes," said Farris. "I think if there's any takeaway that I have in this particular situation, it is that we have that ability to generate so much scale, do some really oddball stuff like run a model that expects the unexpected. Don't be afraid to introduce complete and total randomness and look for wacky outcomes. Really don't discount them for potentially what they might be showing you or telling you because that ultimately might prepare you for the next crisis."

    Scenario modeling helps prepare organizations for change

    The future has always been uncertain. However, the global and systemic impacts of the COVID-19 pandemic have resulted in an unprecedented level of uncertainty in the modern business environment.

    "We have seen from the business world increased requests [for] and usage of analytics and AI machine learning models and, more importantly, simulation models, which can simulate different scenarios," said Anand Rao, global & US artificial intelligence and US data & analytics leader at PwC. "We're also seeing various new techniques being used in AI, so the old techniques and new techniques being combined."

    Business leaders have been seeking advice about what they should be doing these last few months because their past experience has not prepared them for recent events.

    "Executives basically start to say, I don't know what to do. I don't know where this is going," said Rao. "Is there any way that you guys can come up with anything more than just tossing a coin because any technique is better than my random choice."

    The beauty of scenario modeling is it provides opportunities to plan for different possible futures, such as understanding the impacts of future government intervention on supply and demand or how different scenarios might affect business operations, staffing requirements or customer concerns. That way, should one of the scenarios become reality, business leaders know in advance what they should be doing.
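    Scenario modeling in miniature, under invented figures: the same simple cash projection is run under several hypothetical demand shocks, so each outcome is known in advance should that scenario become reality:

```python
# Project end-of-period cash under different revenue-shock scenarios.
# All figures and scenario names are invented for illustration.

def project_cash(start_cash, monthly_revenue, monthly_cost, months, revenue_shock):
    cash = start_cash
    for _ in range(months):
        cash += monthly_revenue * (1 + revenue_shock) - monthly_cost
    return cash

scenarios = {
    "baseline":          0.00,
    "mild downturn":    -0.20,
    "second shutdown":  -0.60,
}
for name, shock in scenarios.items():
    end = project_cash(start_cash=100_000, monthly_revenue=50_000,
                       monthly_cost=45_000, months=6, revenue_shock=shock)
    print(f"{name:16s} -> cash after 6 months: {end:,.0f}")
```

    Real scenario models are of course far richer, but the structure is the same: one shared projection, many parameterized futures, compared side by side before any of them happens.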

    Rao also said that data scientists need to develop their own version of agile so they can build and deploy models faster than they have before.

    "This is something we should have adopted before the pandemic," said Rao. "Now people are looking more at how to develop models in a much faster cycle because you don't have six to eight weeks."

    Author: Lisa Morgan

    Source: InformationWeek 

  • How the need for data contributes to the evolution of the CFO role

    How the need for data contributes to the evolution of the CFO role

    The role of the CFO has evolved in recent years from the person in control of the purse strings, to the trusted right hand of the CEO.

    Their importance was further enhanced during the pandemic as they were often required to oversee changes in a matter of days or even hours that would normally have taken months or years to bring to fruition. They are no longer just the person in charge of the money, but a strategic planner whose insights and counsel inform some of the company’s biggest business decisions.

    But although their role has evolved, the technology which helps them is still playing catch-up, with the lack of reliable analytics and data one of the biggest hurdles to progress.

    A changing role and the need for data

    The finance function has traditionally been known for its stability and process-based culture. But Covid brought about a need for quick-fire decisions, where the rule book had to be re-written overnight or thrown out completely. Data has always been central to agile business planning, forecasting and analysis – all of which are now core to the modern CFO role. The pandemic sped up the need for this type of technology, yet many firms still lack the insights to do the job properly.

    This level of data collection and insight requires the right technology. But in a survey of CFOs by Ernst and Young, eight out of 10 respondents said legacy technology and system complexity was one of their top three inhibitors to progress. 

    Data that tells ‘one truth’

    Companies are awash with information; each department has its own KPIs and methods of reporting. But what CFOs really need to perform their modern role well is not just data, but the ability to connect the dots and gain a holistic view.

    They need integrated financial insights on accounting and finance data, better traceability, and operational reporting on things such as customer segmentation, products and revenue assurance.

    To enable this, Teradata’s multi-cloud software integrates data across finance systems – such as Oracle, PeopleSoft, and SAP – in near real-time. It also has a pre-built financial data model that’s ready to receive and structure data to enable controlled, user-friendly access. This all helps reconcile data from a wide variety of sources into a trusted, compliant platform.

    Turning insights into action

    Just one example of how this level of data insight has helped a firm was when a global retail company needed to modernize their analytics ecosystem. Their processes were manual and time consuming. Their spreadsheet-based model results couldn’t feed downstream models and analytics.

    Crucially, a lack of trust in the model also meant analytics results had limited use to the business. The company worked with Teradata to create a finance-driven foundational data model. It provided in-depth detail on things like revenue and costs from aggregate views of branches, products, vendors and customers.

    This information enabled the financial justification for strategic business decisions. It’s this level of detail that can continue to enable CFOs to retain their position of trusted advisor to the CEO and an indispensable asset to the company.

    Source: CIO

  • How to Sell Your C-suite on Advanced Analytics

    But just because businesses are more open to exploring analytics and executives are dropping data science buzzwords in meetings doesn’t mean you don’t still have to sell your C-suite on investing in such technology. Analytics done right with the best tools and a skilled staff can get extremely expensive, and your C-suite isn’t just going to write you a blank check, especially if you can’t communicate how this investment will positively impact the bottom line.

    As the co-founder of Soothsayer Analytics, which applies artificial intelligence to build analytics tools for companies, Christopher Dole has experienced firsthand how difficult it can be to sell senior leadership on the ROI of advanced analytics.

    Since the founding of his company two years ago, he has continued to hone his pitch on prescriptive analytics, and he’s learned what information C-suite executives look for both before and after the launch of an analytics platform. He listed four pieces of advice for how to not only pitch an analytics program, but also ensure its continued success after its launch.

    Do your homework

    Prior to even scheduling a meeting with senior leadership, you must first arm yourself with the answers to every question that might get thrown your way.

    “I would definitely plan on meeting with any relevant colleagues, peers, or other internal stakeholders about issues and opportunities that they’d like to address,” said Dole. “And once you have some ideas you should also, in advance, meet with your data team and identify any relevant data — preferably data that’s clean and comprehensive — so then when you’re actually in front of the C-suite or board you can start by clearly defining where you’re currently at in the analytics journey, whether it’s the descriptive, diagnostic, predictive, or prescriptive level. If leadership says that your company is already doing analytics, yet they can’t predict what will happen or what can be done to perturb it, then they aren’t really doing analytics, and you should clearly articulate that.”

    It’s also important during your research to find examples of other companies’ experience with analytics solutions similar to the ones you’re proposing.

    “Talk about the value it created for them,” said Dole. “So, for example, if you’re starting on an analytics initiative and you’re a telecom provider, talk about how a competitor tapped into their stream of customer data to reduce churn and provide millions of dollars per year of savings.” When generating a list of examples, he said, try to focus more on instances that generated revenue or prevented losses as opposed to reduced waste. “Making money is often seen as sexier than saving money.”

    Start with the low-hanging fruit

    If you’re just starting out in the analytics game, it may be tempting to ramp up a state-of-the-art program. But it’s actually more important to get some early wins by capturing the low-hanging fruit.

    “If possible, start with a larger problem that can be easily split into sub projects,” said Dole. “For instance, if you decide to focus on customer understanding, start with scientific customer segmentation. That way, once you know who your customers are, you can start to solve other problems that would require that understanding as a foundation anyway, whether it’s identifying opportunities for cross-sell and upsell, predicting and preventing churn, or forecasting customer lifetime value. These quick wins can typically be achieved within 12 weeks.”
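
    As a hedged illustration of the “scientific customer segmentation” quick win Dole describes, the sketch below clusters customers by monthly spend and purchase frequency with a plain k-means loop. The customer figures and the two-segment assumption are invented for the example; a real project would use far more features and a library implementation:

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Plain k-means: returns cluster centroids for 2-D customer features."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Illustrative data: (monthly spend, purchases per month) for 8 customers.
customers = [(20, 1), (25, 2), (22, 1), (30, 2),          # low-value group
             (200, 12), (220, 15), (210, 11), (240, 14)]  # high-value group
centroids = sorted(kmeans(customers, k=2))
```

    Once the segments exist, the downstream quick wins Dole lists (cross-sell targeting, churn models, lifetime value) can be built per segment rather than on the undifferentiated customer base.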

    Set the proper expectations

    It can be incredibly tempting to hype the potential payoff of analytics, but overselling it can result in the C-suite viewing outcomes as failures when they would otherwise be considered wins.

    “It may be a month or two before any snippets of insight can be garnered, so it’s important that they are patient during the process,” said Dole. “A lot of what a data scientist is doing is identifying, collecting and compiling clean data into usable formats, and this can often take up to 60 percent of their time. Make sure they understand that a properly structured analytics project typically provides as much as a 13x ROI. There are many steps to achieving this, and everyone needs to be aligned on the ultimate goal.”

    Above all, you should keep it simple, stupid. It’s all too easy for a data scientist to get bogged down in technical jargon and respond to questions with arcane answers.

    “Use rich visualizations when possible because it’s much easier to understand a graphic than an equation or complex model,” said Dole. “Remove as much of the math and science as possible and just focus on the insights and the value that it’s going to create as well as all of the potential to expand upon it.”

    Author: Simon Owens

    Source: Information Management, 2016

  • Insights from Dresner Advisory Services’ 2016 The Internet of Things and Business Intelligence Market Study

    • Sales and strategic planning teams see IoT as the most valuable.
    • IoT advocates are 3X as likely to consider big data critical to the success of their initiatives & programs.
    • Amazon and Cloudera are the highest ranked big data distributions followed by Hortonworks and MapR.
    • Apache Spark MLlib is the best-known technology on the nascent machine learning landscape today.

    These and many other excellent insights are from Dresner Advisory Services’ 2016 The Internet of Things and Business Intelligence Market Study published last month. What makes this study noteworthy is the depth of analysis and insights the Dresner analyst team delivers regarding the intersection of big data and the Internet of Things (IoT), big data adoption, analytics, and big data distributions. The report also provides an analysis of Cloud Business Intelligence (BI) feature requirements, architecture, and security insights. IoT adoption is thoroughly covered in the study, with a key finding being that large organizations or enterprises are the strongest catalyst of IoT adoption and use. Mature BI programs are also strong advocates or adopters of IoT and as a result experience greater BI success. IoT advocates are defined as those respondents that rated IoT as either critical or very important to their initiatives and strategies.

    Key takeaways of the study include the following:

    • Sales and strategic planning see IoT as the most valuable today. The combined rankings of IoT as critical and very important are highest for sales, strategic planning and the Business Intelligence (BI) Competency Centers. Sales ranking IoT so highly is indicative of how a wide spectrum of companies, from start-ups to large-scale enterprises, is attempting to launch business models and derive revenue from IoT. Strategic planning’s prioritization of IoT is also driven by a long-term focus on how to capitalize on the technology’s inherent strengths in providing greater contextual intelligence, insight, and potential data-as-a-service business models.


    • Biotechnology, consulting, and advertising are the industries that consider IoT most important. Adoption of IoT across a wide variety of industries is happening today, with significant results being delivered in manufacturing, distribution including asset management, logistics, supply chain management, and marketing. The study found that the majority of industries do not yet see IoT as important, with the exception of biotechnology.


    • Location intelligence, mobile device support, in-memory analysis, and integration with operational systems are the four areas that most differentiate IoT advocates’ interests and focus. Compared to the overall sample of respondents, IoT advocates have significantly more in-depth areas of focus than the broader respondent base. These four areas show they have a practical, pragmatic mindset regarding how IoT can contribute greater process efficiency and revenue and integrate effectively with existing systems.


    • An organization’s ability to manage big data analytics is critically important to their success or failure with IoT. IoT advocates are 3X as likely to consider big data critical, and 2X as likely to consider big data very important. The study also found that IoT advocates see IoT as a core justification for investing in and implementing big data analytics and architectures.


    • Data warehouse optimization, customer/social analysis, and IoT are the top three big data use cases organizations are pursuing today according to the study. Data warehouse optimization is considered critical or very important to 50% of respondents, making this use case the most dominant in the study. Large-scale organizations are adopting big data to better aggregate, analyze and take action on the massive amount of data they generate daily to drive better decisions. One of the foundational findings of the study is that large-scale enterprises are driving the adoption of IoT, which is consistent with the use case analysis provided in the graphic below.


    • IoT advocates are significantly above average in their use of advanced and predictive analytics today. The group of IoT advocates identified in the survey is 50% more likely to be current users of advanced and predictive analytics apps as well. The study also found that advanced analytics users tend to be the most sophisticated and confident BI audience in an organization and see IoT data as ideal for interpretation using advanced analytics apps and techniques.


    • Business intelligence experts, business analysts and statisticians/data scientists are the greatest early adopters of advanced and predictive analytics. More than 60% of each of these three groups of professionals is using analytics often, which could be interpreted as more than 50% of their working time.


    • Relational database support, open client connectors (ODBC, JDBC) and automatic upgrades are the three most important architectural features for cloud BI apps today. Connectors and integration options for on-premises applications and data (ERP, CRM, and SCM) are considered more important than cloud application and database connection options. Multitenancy is considered unimportant to the majority of respondents. One factor contributing to the unimportance of multi-tenancy is the assumption that this is managed as part of the enterprise cloud platform.


    • MapReduce and Spark are the two best-known and most important big data infrastructure technologies according to respondents today. 48% believe that MapReduce is important and 42% believe Spark is. The study also found that all other categories of big data infrastructure are considered less important, as the graphic below illustrates.


     Source: Forbes, 4 October 2016

  • Integration Will Accelerate Internet Of Things, Industrial Analytics Growth In 2017

    • Enabling real-time integration across on-premise and cloud platforms often involves integrating SAP, Salesforce, third-party and legacy systems. 2017 will be a break-out year for real-time integration between SAP, Salesforce, and third-party systems in support of Internet of Things and Industrial Analytics.
    • McKinsey Global Institute predicts that the Internet of Things (IoT) will generate up to $11T in value to the global economy by 2025.
    • Predictive and prescriptive maintenance of machines (79%), customer/marketing related analytics (77%) and analysis of product usage in the field (76%) are the top three applications of Industrial Analytics in the next 1 to 3 years.

    Real-Time Integration Is the Cornerstone Of Industrial Analytics

    Industrial Analytics (IA) describes the collection, analysis and usage of data generated in industrial operations and throughout the entire product lifecycle, applicable to any company that is manufacturing and selling physical products. It involves traditional methods of data capture and statistical modeling. Integration across legacy, third-party, Salesforce, and SAP systems is one of the foundational capabilities that Industrial Analytics relies on today and will continue to rely on in the future. Real-time integration is essential for enabling connectivity between Internet of Things (IoT) devices, in addition to enabling improved methods for analyzing and interpreting data. One of the most innovative companies in this area is enosiX, a leading global provider of Salesforce and SAP integration applications and solutions. They’re an interesting startup to watch and have successfully deployed their integration solutions at Bunn, Techtronic Industries, YETI Coolers and other leading companies globally.

    A study has recently been published that highlights just how foundational integration will be to Industrial Analytics and IoT. You can download the Industrial-Analytics-Report-2016-2017.pdf. This study was initiated and governed by the Digital Analytics Association e.V. Germany (DAAG), which runs a professional working group on the topic of Industrial Analytics. Research firm IoT Analytics GmbH was selected to conduct the study. Interviews with 151 analytics professionals and decision-makers in industrial companies were completed as part of the study. Hewlett Packard Enterprise and data science service companies Comma Soft and Kiana Systems sponsored the research. All research and analysis steps required for the study, including interviewing respondents, data gathering, data analysis and interpretation, were conducted by IoT Analytics GmbH. Please see page 52 of the study for the methodology.

    Key Takeaways:

    • With real-time integration, organizations will be able to increase revenue (33.1%), increase customer satisfaction (22.1%) and increase product quality (11%) using Industrial Analytics. The majority of industrial organizations see Industrial Analytics as a catalyst for future revenue growth, not primarily as a means of cost reduction. Upgrading existing products, changing the business model of existing products, and creating new business models are three typical approaches companies are taking to generate revenue from Industrial Analytics. Integration is the fuel that will drive Industrial Analytics in 2017 and beyond.


    • For many manufacturers, the more pervasive their real-time SAP integration is, the more effective their IoT and Industrial Analytics strategies will be. Manufacturers adopting this approach to integration and enabling Industrial Analytics through their operations will be able to attain predictive and prescriptive maintenance of their product machines (79%). This area of preventative maintenance is the most important application of Industrial Analytics in the next 1 – 3 years. Customer/marketing-related analytics (77%) and analysis of product usage in the field (76%) are the second- and third-most important. The following graphic provides an overview of the 13 most important applications of Industrial Analytics.
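
    To make the predictive-maintenance idea concrete, here is a deliberately simple sketch that is not from the study: it flags sensor readings that drift above a rolling baseline, standing in for the far richer models real Industrial Analytics platforms use. The vibration figures are invented:

```python
from collections import deque

# Illustrative vibration readings from a machine sensor (mm/s); the
# gradual rise at the end mimics bearing wear.
readings = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 2.6, 3.1, 3.7, 4.4]

def maintenance_alerts(stream, window=5, factor=1.5):
    """Flag readings that exceed `factor` times the rolling baseline mean."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(stream):
        if len(recent) == window and value > factor * (sum(recent) / window):
            alerts.append(i)
        recent.append(value)
    return alerts

alerts = maintenance_alerts(readings)
```

    Even this crude rule catches the wear trend before outright failure, which is the essential economics of predictive over reactive maintenance.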


    • 68% of decision-makers have a company-wide data analytics strategy, 46% have a dedicated organizational unit and only 30% have completed actual projects, further underscoring the enabling role of integration in their analytics and IoT strategies. The study found that out of the remaining 70% of industrial organizations, the majority of firms have ongoing projects in the prototyping phase.
    • Business Intelligence (BI) tools, Predictive Analytics tools and Advanced Analytics Platforms will be pivotal to enabling industrial data analysis in the next five years. Business Intelligence Tools such as SAP Business Objects will increase in importance to industrial manufacturing leaders from 39% to 77% in the next five years. Predictive Analytics tools such as HPE Haven Predictive Analytics will increase from 32% to 69%. The role of spreadsheets used for industrial data analytics is expected to decline (i.e., 27% think it is important in 5 years vs. 54% today).


    • The Industrial Analytics technology stack is designed to scale based on the integration of legacy systems, industrial automation apps and systems, and MES and SCADA systems, combined with sensor-based data. IoT Analytics GmbH defines the technology stack based on four components including data sources, necessary infrastructure, analytics tools, and applications. The following graphic illustrates the technology stack and underscores how essential integration is to the vision of Industrial Analytics being realized.


    • Industrial Internet of Things (IIoT) and Industry 4.0 will rely on real-time integration to enable an era of shop-floor smart sensors that can make autonomous decisions and trade-offs regarding manufacturing execution. IoT Analytics GmbH predicts this will lead to smart processes and smart products that communicate within production environments and learn from their decisions, improving performance over time. The study suggests that Manufacturing Execution System (MES) agents will be vertically integrated into higher level enterprise planning and product change management processes so that these organizations can synchronously orchestrate the flow of data, rather than go through each layer individually.


    Source: business2community.com, 19 December 2016

  • Is Predictive Analytics the future of BI? Or is it something totally different?

    M. Zaman observed a few years ago that the market is witnessing an unprecedented shift in business intelligence (BI), largely because of technological innovation and increasing business needs. The latest shift in the BI market is the move from traditional analytics to predictive analytics. Although predictive analytics belongs to the BI family, it is emerging as a distinct new software sector.

    We can ask ourselves whether predictive analytics is a new variant of BI or something new altogether. In essence, it is important to understand both the commonalities and the differences. Accordingly, it is a matter of definition whether predictive analytics belongs to the BI family or not. Let's focus on the differences.

    Traditional analytical tools claim to have a real 360° view of the enterprise or business, but they analyze only historical data—data about what has already happened. Traditional analytics help gain insight into what went right and what went wrong in decision-making.

    However, past and present insight and trend information are not enough to be competitive in business. Business organizations need to know more about the future, and in particular, about future trends, patterns, and customer behavior in order to understand the market better.

    Predictive analytics are used to determine the probable future outcome of an event or the likelihood of a situation occurring. It is the branch of data mining concerned with the prediction of future probabilities and trends. Predictive analytics is used to automatically analyze large amounts of data with different variables; it includes clustering, decision trees, market basket analysis, regression modeling, neural nets, genetic algorithms, text mining, hypothesis testing, decision analytics, and more. Many of these techniques are not in common use in BI.

    The core element of predictive analytics is the predictor, a variable that can be measured for an individual or entity to predict future behavior. Multiple predictors are combined into a predictive model, which, when subjected to analysis, can be used to forecast future probabilities with an acceptable level of reliability. In predictive modeling, data is collected, a statistical model is formulated, predictions are made, and the model is validated (or revised) as additional data become available.
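
    The collect-model-predict-validate cycle described above can be sketched end to end. The example below is illustrative only: it invents a single predictor (support calls per month) with a synthetic churn relationship, fits a one-variable logistic model by gradient descent, and validates it on held-out records, mirroring the predictor-to-predictive-model pipeline:

```python
import math
import random

rng = random.Random(1)

# Collect data (synthetic, for illustration): the predictor is support
# calls per month, and churn probability rises with call volume.
def make_record():
    calls = rng.uniform(0, 10)
    churned = 1 if rng.random() < calls / 10 else 0
    return calls, churned

data = [make_record() for _ in range(400)]
train, valid = data[:300], data[300:]

# Formulate a statistical model: one-predictor logistic regression,
# fitted by batch gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(w * x + b)))
        grad_w += (p - y) * x
        grad_b += (p - y)
    w -= 0.1 * grad_w / len(train)
    b -= 0.1 * grad_b / len(train)

# Validate the model on held-out data before trusting its predictions.
def predict(x):
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

accuracy = sum(predict(x) == (y == 1) for x, y in valid) / len(valid)
```

    The validation step is the part the paragraph stresses: the model is only revised or accepted once fresh data confirms it forecasts with an acceptable level of reliability.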

    Predictive analytics combine business knowledge and statistical analytical techniques to apply with business data to achieve insights. These insights help organizations understand how people behave as customers, buyers, sellers, distributors, etc.

    Multiple related predictive models can produce good insights for strategic company decisions, like where to explore new markets, acquisitions, and retention; find up-selling and cross-selling opportunities; and discover areas that can improve security and fraud detection. Predictive analytics indicates not only what to do, but also how and when to do it, and can explain what-if scenarios.

  • Key differences between Business Intelligence and Data Science

    Key differences between Business Intelligence and Data Science

    Cloud computing and other technological advances have made organizations focus more on the future rather than analyze the reports of past data. To gain a competitive business advantage, companies have started combining and transforming data, which forms part of the real data science.

    At the same time, they are also carrying out Business Intelligence (BI) activities, such as creating charts, reports or graphs and using the data. Although there are great differences between the two sets of activities, they are equally important and complement each other well.

    For executing the BI functions and data science activities, most companies have professionally dedicated BI analysts as well as data scientists. However, it is here that companies often confuse the two without realizing that these two roles require different expertise.

    It is unfair to expect a BI analyst to be able to make accurate forecasts for the business. It could even spell disaster for any business. By studying the major differences between BI and real data science, you can choose the right candidate for the right tasks in your enterprise.

    Area of Focus

    On the one hand, traditional BI involves generating dashboards for historic data display according to a fixed set of key performance metrics, agreed upon by the business. Therefore, BI relies more on reports, current trends, and Key Performance Indicators (KPIs).

    On the other hand, real data science focuses more on predicting what might eventually happen in the future. Data scientists are thus more focused on studying the patterns and various models and establishing correlations for business forecasts.

    For example, corporate training companies may have to predict the growing need for new types of training based on the existing patterns and demands from corporate companies.

    Data Analysis and Quality

    BI requires analysts to look at the data backwards, namely the historical data, so their analysis is more retrospective. It demands that the data be absolutely accurate, since it is based on what actually occurred in the past.

    For example, the quarterly results of a company are generated from actual data reported for business done over the last three months. There is no scope for error as the reporting is descriptive, without being judgmental.

    With regard to data science, data scientists are required to make use of predictive and prescriptive analyses. They have to come up with reasonably accurate predictions about what is likely to happen in the future, using probabilities and confidence levels.

    This is not guesswork, as the company will execute the necessary steps or improvement measures based on the predictive analysis and future projections. It is clear that data science cannot be 100% accurate; however, it is required to be “good enough” for the business to take timely decisions and actions to deliver the requisite results.

    An ideal example of data science is estimating the business revenue generation of your company for the next quarter.
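
    That revenue-estimation example might, in its simplest form, look like the following sketch: an ordinary least-squares trend over invented quarterly figures, with a crude residual band to reflect the "good enough, not 100% accurate" point made above. The revenue numbers are made up for illustration:

```python
# Illustrative quarterly revenues (in $M); figures are made up.
revenue = [10.2, 10.8, 11.1, 11.9, 12.4, 13.0]
quarters = list(range(len(revenue)))

# Ordinary least-squares trend line, fitted by hand.
n = len(revenue)
mean_q = sum(quarters) / n
mean_r = sum(revenue) / n
slope = (sum((q - mean_q) * (r - mean_r) for q, r in zip(quarters, revenue))
         / sum((q - mean_q) ** 2 for q in quarters))
intercept = mean_r - slope * mean_q

# Point forecast for the next quarter, plus a crude residual-based band
# that makes the uncertainty in the estimate explicit.
forecast = intercept + slope * n
resid = max(abs(r - (intercept + slope * q)) for q, r in zip(quarters, revenue))
low, high = forecast - resid, forecast + resid
```

    A real forecast would add seasonality and exogenous drivers, but even this trivial version shows the difference from BI: it states a range for a quarter that has not happened yet, rather than reporting one that has.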

    Data Sources and Transformation

    With BI, companies require advance planning and preparation to assemble the right combination of data sources for a given transformation. Data science, by contrast, can create data transformations on the fly, using data sources available on demand, to get appropriate insights about customers, business operations and products.

    Need for Mitigation

    BI analysts do not have to mitigate any uncertainty surrounding the historical data, since it is based on actual occurrences, is accurate, and does not involve any probabilities.

    For real data science, there is a need to mitigate the uncertainty in the data. For this purpose, data scientists use various analytic and visualization techniques to identify any uncertainties in the data. They then apply appropriate transformation techniques to convert the data into a workable, approximate format that can easily be combined with other data sources.
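
    As one hedged illustration of mitigating that uncertainty, the sketch below flags missing values and gross outliers in an invented data feed using a robust median/MAD rule, then imputes the median so the series becomes workable and approximate, ready to combine with other sources:

```python
import statistics

# Illustrative raw feed with a missing value and an implausible spike.
readings = [102.0, 98.5, None, 101.2, 997.0, 99.8, 100.4]

# Step 1: identify uncertainty — missing entries and gross outliers,
# using the median and median absolute deviation (robust to the spike).
present = [r for r in readings if r is not None]
med = statistics.median(present)
mad = statistics.median(abs(r - med) for r in present)

def is_outlier(r, threshold=5.0):
    return abs(r - med) / (mad or 1.0) > threshold

# Step 2: transform into a workable, approximate series — impute the
# median where data is missing or clearly wrong, and flag what was touched.
cleaned, flags = [], []
for r in readings:
    bad = r is None or is_outlier(r)
    cleaned.append(med if bad else r)
    flags.append(bad)
```

    Keeping the flags alongside the cleaned values preserves the honesty of the exercise: downstream models know which points are approximations rather than observations.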


    As you cannot get the data transformation done instantly with BI, it is a slow manual process involving plenty of pre-planning and comparisons. It needs to be repeated monthly, quarterly or annually and it is thus not reusable.

    Yet, the real data science process involves creating instant data transformations via predictive apps that trigger future predictions based on certain data combinations. This is clearly a fast process, involving a lot of experimentation.

    Whether you need reports over the last five years or future business models, BI and real data science are necessary for any business. By knowing the difference, you can make better decisions that will lead to business success.

    Author: Brigg Patten

    Source: Smart Data Collective

  • Making Content Marketing Work

    Making Content Marketing Work

    Speeds & feeds. “Hero” shots. Print ads. Product placement. Really expensive TV advertisements featuring celebrity endorsements.

    Pitching a product and service back when those phrases dominated marketing and advertising discussions seems very quaint today.

    In an era where the incumbent media companies are seeing their audiences fragment across a host of different devices and online sites (including the online versions of the incumbent media providers), those old school techniques are losing their juice.

    Consumers no longer want a spec sheet or product description that tells them what the product or service is — they want to be shown what the product or service can do for them. And they want to see how other actual people — just like them — use the product or service.

    As if that wasn’t tough enough, today’s consumers can spot inauthentic pitches from a mile away. They will happily share your lack of authenticity with millions of their closest friends on Facebook, via Twitter, and so on.

    Content marketing has emerged in the past three years as a practice that allows marketers to balance providing richer, deeper information, or content, about their products with doing so authentically.

    Like so many things in life, describing what content marketing is, and what it can accomplish, is way easier than actually doing content marketing successfully.

    In one of Gartner’s earlier docs on content marketing, my colleague Jake Sorofman exhorted marketers to “think like publishers.” Sound advice, but many marketers find that difficult. To date, while many marketers are getting much better at sourcing and distributing the content elements they need, measuring content marketing’s contribution is not easy. But it can be done.

    Using content analytics gives content marketers insight into how their efforts are being received by consumers, providing the kind of objective measures that previous generations of marketers dreamed of having. Jake’s most recent research round-up on content marketing has some timely examples of companies which have wrestled with the content marketing challenge and are realizing the value of not merely finding, creating and distributing content, but also of using all the tools available to amplify their efforts. The story about IKEA’s work in the area is particularly interesting.

    Yep, times have changed, and marketing is a much more complex field than it used to be. Digital, content, social, and mobile marketer are job titles that, for the most part, didn’t exist 15 years ago. The good news is that the tools and techniques those new job titles require are increasingly available.

    By Mike McGuire | April 6, 2015 |

  • Overcoming data challenges in the financial services sector  

    Overcoming data challenges in the financial services sector

    Importance of the financial services sector

    The financial services industry plays a significant role in global economic growth and development. The sector contributes to a more efficient flow and management of savings and investments, and it enhances the risk management of financial transactions for products and services. Institutions such as commercial and investment banks, insurance companies, non-banking financial companies, credit and loan companies, brokerage firms, and trust companies offer a wide range of financial services and distribute them in the marketplace. Some of the most common financial services are credits, loans, insurance, and leases, distributed directly by insurance companies and banks, or indirectly via agents and brokers.

    Limitations and challenges in data availability

    Given the important role of financial services in the global economy, one would expect the financial services market to be professional and highly developed, also in terms of data availability. Specifically, one would expect well-designed databases in which a wide range of information is presented and can be collected for specific industries. However, reality does not meet these expectations.

    Through assessments of various financial service markets, it has been observed that data collection is a challenging process. Several causes contribute to this situation. Lack of or poor data availability, data opacity, consolidated information in market or annual reports, and differing categorization schemes for financial services are some of the most significant barriers. Differences in the legal framework among countries have a major impact on the entry and categorization of data. A representative example is the different classification schemes and categorization of financial services across countries. Specifically, EU countries are obligated to publish data on financial service lines under a certain classification scheme with pre-defined classes, which in many cases differs from the classification schemes or classes of non-EU countries, contributing to an unclear, inaccurate overview of the market. Identifying and understanding each classification scheme is necessary to avoid double counting and overlapping data. In addition, public institutions often publish data that reveal only part of the market, rather than the actual market sizes. Lastly, some financial services are defined differently across countries, which adds to the complexity of collecting data and assessing the financial services market.

    Need for a predictive model

    In order to overcome the challenges of data inconsistency and poor, limited, or non-existent data availability, and to create an accurate estimation of the financial services market, it is necessary to develop a predictive model that analyzes a wide range of indicators. A characteristic example is the estimation of the global financial services market conducted by The World Bank, where an analysis model based on both derived and measured data was created to address the challenges of limited data inputs.

    An analysis model for the assessment of financial services markets, created by Hammer, takes into consideration both the collection of qualitative and quantitative data from several sources and predictive indicators. In previous assessments of certain financial services markets, data was collected from publications, articles, and reports by public financial services research institutions, national financial services associations and association groups, and private financial services companies. The opinions of field experts also constituted a significant source of information. The model included regression and principal component analysis, in which derived data was produced based on certain macroeconomic factors (such as country population, GDP, GDP per sector, and unemployment rate), trade indicators, and economic and political factors.

    The selection of indicators and of the analysis model depends on the type of financial service product and the relevant market to be assessed. In addition, based on the model analysis, it is possible to identify and validate correlations between a set of predictive indicators that have been considered potential key drivers of specific markets. In conclusion, it is possible to estimate the sizes of financial services markets with the support of an advanced predictive analysis model, which can enable and enhance the comparability and consistency of data across different markets and countries.
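    As an illustration of the kind of model described above (a sketch, not Hammer's or The World Bank's actual implementation), the following combines principal component analysis with ordinary least squares regression to estimate a market size from macroeconomic indicators. It uses numpy and entirely synthetic data; every indicator, coefficient, and country value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical macro indicators for 30 countries with known market sizes:
# [population (m), GDP (bn), unemployment rate (%), trade openness index]
X = rng.uniform([1, 10, 2, 20], [80, 2000, 15, 120], (30, 4))
true_w = np.array([0.5, 0.02, -1.0, 0.1])          # invented "true" relationship
y = X @ true_w + rng.normal(0, 2, 30)              # known market sizes (EUR bn)

# Principal component analysis: project standardized indicators onto top components.
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
Z = Xs @ Vt[:3].T                                  # keep the top 3 components

# Ordinary least squares regression on the components.
A = np.column_stack([Z, np.ones(len(Z))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimate the market size of a country with no published market data,
# using only its macro indicators.
x_new = np.array([[25, 400, 8, 60]])
z_new = (x_new - X.mean(0)) / X.std(0) @ Vt[:3].T
estimate = np.column_stack([z_new, np.ones(1)]) @ coef
```

In practice the indicator set, the number of retained components, and the validation against countries where data does exist would all need careful tuning per market.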

    Author: Vasiliki Kamilaraki

    Source: Hammer, Market Intelligence

  • Predicting Student Success In The Digital Era

    Predicting Student Success In The Digital Era

    I had the pleasure of moderating a webinar focusing on the work of two Pivotal data scientists working with a prestigious mid-west university to use data to predict student success. It’s a topic that has long interested me, as I devoted a good deal of time trying to promote this type of project in the early 2000s.

    That could reflect on my skills as a salesman, but on consideration it also illustrates how fast and how far our big data technologies have brought us. So after hosting the webinar (which I recommend for your viewing; you can see it here) I did a quick literature search and was gratified to see that, in fact, many colleges and universities are undertaking these studies.

    One thing stood out just by examining the dates of the published literature. Prior to about 2007, what predictive analytics was performed tended to focus on the sort of static data you can find in a student’s record: high school GPA, SAT scores, demographics, types of preparatory classes taken, last term’s grades, and the like. That’s pretty much all there was to draw on, and there was some success in that period.

    What changes in the most current studies is the extensive use of unstructured data integrated with structured data. It wasn’t until about 2007 that our ability to store and analyze unstructured data took off and now we have data from a variety of new sources.

    Learning Management Systems 

    One of the most important new sources. These are the online systems used to interact with students outside of the classroom. From these we can learn, for example, when students submitted assignments relative to the deadline, how they interact with instructors and classmates in the chat rooms, and a variety of click-stream data from library sites and the like.

    Sensor and Wi-Fi data 

    Show frequency and duration on campus or at specific locations like the library.

    Student Information Systems 

    These aren’t necessarily new, but they are greatly improved in the level of detail on classes enrolled in and completed, with regular grade markers.

    Social Media 

    What is now standard in commerce is becoming a tool for assessing progress or flagging markers for concern. Positive and negative social media comments are evaluated for sentiment and processed as streaming data that can be associated with specific periods in a student’s term or passage through to graduation.

    The goals of each study are slightly different. Some seek better first-year integration programs, which are so important to students’ long-term success. Some focus on the transition from community college to a four-year institution. But universally they tend to look at similar markers that would allow counsellors and instructors to intervene. Some of those common markers are:

    • Predicting first term GPA.

    • Predicting specific course grades.

    • Predicting reenrollment.

    • Predicting graduation likelihood, with some studies focused on getting students through in four years and others on getting them through at all.

    As in any data science project, each institution seems to have identified its own unique set of features drawn from both traditional structured and new unstructured data sources. Paul Gore, who headed one of these studies at the University of Utah, had a nice summary of the categories that’s worth considering. He says the broad categories of predictive variables fall into these six groups:

    Measures of academic performance:

    Academic engagement 

    Also known as academic conscientiousness: in other words, how seriously does the student take the business of being a student? Does the student turn in assignments on time? Attend class diligently? Ask for help when needed?

    Academic efficacy

    The student's belief and confidence in their ability to achieve key academic milestones (such as the confidence to complete a research paper with a high degree of quality, or to complete the core classes with a B average or better, or their confidence in their ability to choose a major that will be right for them).

    Measures of academic persistence:

    Educational commitment

    This refers to a student's level of understanding of why they are in college. Students with a high level of educational commitment are not just attending college because it is "what I do next" after high school (i.e., in order to attain a job or increase their quality of life); these students have a more complex understanding of the benefits of their higher education and are more likely to resist threats to their academic persistence.

    Campus engagement

    This is the intent or desire to become involved in extracurricular activities. Does the student show interest in taking a leadership role in a student organization, or participating in service learning opportunities, intramural sports, or other programs outside of the classroom?

    Measures of emotional intelligence:


    How well does the student respond to stress? Do small setbacks throw the student "off track" emotionally, or are they able to draw on their support network and their own coping skills to manage that stress and proceed toward their goals?

    Social comfort

    Gore notes that "social comfort is related to student outcomes in a quadratic way -- a little bit of social comfort is a good thing, while a lot may be less likely to serve a student well, as this may distract their attention from academic and co-curricular pursuits." (aka too much partying).

    Where the studies were willing to share, the fitness measures of the predictive models look pretty good, achieving classification success rates in the 70% to 80% range.
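    As a rough illustration of what such a classification model might look like (none of the studies' actual code is referenced here), the sketch below fits a logistic regression with numpy to synthetic student features and reports its classification success rate. The features, weights, and labels are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-student features (standardized): high school GPA,
# on-time assignment rate from the LMS, weekly hours on campus from
# Wi-Fi data, and a social media sentiment score.
n = 400
X = rng.normal(0, 1, (n, 4))

# Synthetic "re-enrolled next term" labels driven by the features plus noise.
y = (X @ np.array([0.8, 1.2, 0.5, 0.3]) + rng.normal(0, 0.5, n) > 0).astype(float)

# Logistic regression fit by batch gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted re-enrollment probability
    grad = p - y
    w -= 0.5 * X.T @ grad / n
    b -= 0.5 * grad.mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == (y > 0.5)).mean()   # classification success rate
```

Real studies would of course validate on held-out students rather than on the training set, and would engineer the features from actual LMS, sensor, and registrar data.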

    From our data scientist friends at Pivotal who are featured in the webinar, we also learn that administrators and counsellors are generally positive about the new risk indicators. There was always the possibility that implementation might be hampered by disbelief, but there are some notable examples of good acceptance.

    Some of the technical details are also interesting. For example, there are instances where the models are being run monthly to update the risk scores. This allows the college to act within the current term and not wait for the term to be over, which might be too late.

    And there are examples in which the data is being consumed not only by administrators and counsellors but also being pushed directly to the students through mobile apps.

    I originally thought to include a listing of the colleges that were undertaking similar projects but a Google search shows that there are a sufficiently large number that this is no longer a completely rare phenomenon. In its early stages to be sure but not rare.

    Finally, I was struck by one phenomenon that is not meant as a criticism, just an observation. Where the research and operationalization of the models was funded by, say, a three-year grant, it took three years to complete the project. But where our friends at Pivotal were embraced by their client, four data scientists (two from Pivotal and two from the university) had it up and running in three months. Just saying...

    Author: William Vorhies

    Source: Data Science central

  • Predictive analytics in customer surveys: closing the gap between data and action

    Customer surveys are a conduit to the voice of the customer (VoC). However, simply capturing survey data is no longer enough to achieve better results.

    When used appropriately, customer surveys can help companies more effectively identify new markets with the most potential for success, create a data-driven pricing strategy, and gauge customer satisfaction. However, capturing survey data is only the first step.

    Companies must analyze and act on survey data to achieve their goals. This is where predictive analytics comes into the picture. As illustrated in Figure 1, companies using predictive analytics to process survey data achieve far superior results across several key performance indicators (KPIs), compared to those without this technology. 

    Since happy customers are more likely to maintain or increase their spend with a business, growth in customer lifetime value among predictive analytics users signals improvement in customer satisfaction rates. Similarly, companies using this technology attain 4.6 times the annual increase in overall sales team attainment of quota, compared to non-users. This correlation indicates that predictive analytics can help companies convert survey data into top-line revenue growth.

    Use of predictive analytics to forecast and predict the likelihood of certain events, such as potential sales or changes in customer satisfaction, requires companies to have a comprehensive view of customer and operational data. Most organizations don’t struggle with a lack of survey data given the wealth of insights they glean through the activities noted above. Instead, they are challenged with putting this data to good use. Indeed, findings from Aberdeen’s May 2016 study, CEM Executive's Agenda 2016: Aligning the Business Around the Customer, show that only 15% of companies are fully satisfied with their ability to use survey data in customer experience programs.

    How to Use Predictive Analytics to Maximize Your Performance

    Data shows that Best-in-Class firms (see sidebar) are 20% more likely to be fully satisfied with their use of survey data when conducting customer conversations. A closer look at these organizations reveals that they have a 59% greater adoption rate when it comes to predictive analytics, compared to All Others (35% vs. 22%).

    For any organization not currently using predictive analytics to analyze survey data, this technology holds the key to significant performance improvements. As such, we see that with a mere 35% adoption rate, many top performers could use predictive analytics to do even better.

    One mistake companies make when adopting new technologies is assuming that simply deploying the technology will result in sudden, and recurring, performance improvements. The situation is no different with predictive analytics. The fact of the matter is, if an organization is looking to increase customer lifetime value or profit margins, it must design and execute a well-crafted strategy for utilizing predictive analytics in conjunction with customer surveys.

    On a high level, predictive analytics can be used in two ways:

    1. Systematic analysis: Organizations can establish an analytics program to measure and manage survey data on a regular basis. These programs are aimed at accomplishing certain goals, such as gauging customer satisfaction levels at regular intervals to correlate changes in customer satisfaction rates with changes in the marketplace and overall business activities.

    2. Ad-hoc analysis: Companies can also analyze survey data on an as-needed basis. For example, a company could conduct a one-time analysis of the potential customer spend in a new market to decide whether to enter that market.

    It’s important to note that companies can use both systematic and ad-hoc analysis. Use of systematic analysis allows organizations to continuously monitor their progress towards ongoing performance goals, such as improving customer satisfaction. Ad-hoc analysis, on the other hand, allows companies to use the same analytical capabilities to answer specific questions that may arise.

    Having outlined the two general ways companies use predictive analytics, it’s also important to share the two general types of processes that can be used to produce such analysis:

    1. Statistical analysis: Predictive analytics can provide decision makers across the business with insights into hidden trends and correlations. For example, companies conducting statistical analysis can identify how the use of certain customer interaction channels (e.g. web, email, or social media) correlates with customer satisfaction rates as revealed through surveys. This, in turn, allows companies to identify which channels work best in meeting (and exceeding) the needs of their target clientele.

    2. Modeling: This second type breaks into two sub-categories:

    1. Forecasting: Companies can use historical and real-time survey data to forecast the likelihood of certain outcomes. For example, a company curious about the potential sales uplift to be expected from a new market would survey potential buyers in the area and ask about their intent to buy and preferred price points. The forecasting capability of their predictive analytics platform would then allow the company to forecast potential sales numbers.

    2. Predicting: This refers to analyzing historical and real-time survey data to estimate a specific result that might have already happened, might be happening currently, or will happen in the future. For example, an organization might decide to build a model that helps identify customer spend in a specific market. This might start by developing a model of past sales results, where the model produces a result similar to the actual results observed by the company. Having ensured the accuracy of the model, the organization can then use it to predict current and future sales based on changes in the factors built into the same predictive model.

      The difference between forecasting and predicting is that the former only looks at future events or values, whereas the latter can look at future, current, or historical events when building models. Also, the former relies on already available past data (e.g. snow blower purchases) to make forecasts, whereas the latter allows companies to predict a certain outcome, in this case snow blower purchases, by looking at related factors influencing this result, including recent temperatures, changes in average income, and others.
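    The forecasting/predicting distinction can be sketched in a few lines of numpy, staying with the snow blower example; all numbers here are made up for illustration.

```python
import numpy as np

# Forecasting: extrapolate from the history of the quantity itself.
past_sales = np.array([120, 135, 150, 160, 178, 190.0])  # hypothetical monthly sales
t = np.arange(len(past_sales))
slope, intercept = np.polyfit(t, past_sales, 1)          # fit a linear trend
forecast_next = slope * len(past_sales) + intercept      # next month's forecast

# Predicting: estimate the same quantity from related factors
# (temperature, change in average income), not from its own history.
factors = np.array([[-5, 1.2], [-8, 0.9], [-2, 1.5], [-10, 1.1]])  # invented
sales = np.array([200, 240, 150, 260.0])
A = np.column_stack([factors, np.ones(len(sales))])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)         # ordinary least squares
predicted = np.array([[-7, 1.0, 1.0]]) @ coef            # sales under new conditions
```

The first model can only say where the existing series is heading; the second can answer "what if temperatures drop?" because it is built on the influencing factors themselves.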



    Companies have many ways to capture survey data; however, only 15% are fully satisfied with their ability to use this data. Predictive analytics helps companies alleviate this challenge by answering business questions designed to improve performance results.

    However, it’s imperative to remember that the statistical insights gleaned through predictive analytics, as well as the models predictive analytics can produce, will only yield results if companies act on the intelligence thus acquired. Don’t overlook the importance of coupling analysis and action. If you are planning to invest in this technology (or have already invested but seek to improve your results), we recommend that you make bridging the gap between data and action a key priority for your business. 

    Author: Omer Minkara

    Source: white paper Aberdeen Group (sponsored by IBM)

  • Predictive analytics in the futures trading market

    Predictive analytics in the futures trading market

    Futures traders need to always be on their game. They should use the latest technology to keep a competitive edge in the market.

    New technology has always been at the forefront of the financial industry. Computers have long played an important role in improving the efficiency of financial markets. In many ways, this has been beneficial. In other ways, it has created new risks. The emergence of flash trading is a prime example of a new technology that hurts short-term traders. This is even more of an issue in futures markets.

    The same can be said about predictive analytics. Aishwarya Singh from Analytics Vidhya points out that new advances in predictive analytics technology are reshaping financial trading. Investors that trade futures and other derivative investments are becoming more reliant on predictive analytics.

    The ultimate question is whether predictive analytics is going to be a net positive or negative for the industry. There are a number of factors that need to be taken into consideration to answer this question.

    What are the potential benefits of predictive analytics in futures trading?

    Since the first futures exchange was launched, investors have been looking for ways to predict financial asset prices. It was always understood that somebody who could predict future price movements would have a stellar advantage and ultimately become filthy rich. Of course, it is not feasible for everybody to become a millionaire through the futures market.

    The nice thing about the futures market is that it is not a zero-sum game. It is possible for everybody to make a positive return over the long run. This is one of the biggest things that differentiates futures and bond investing from gambling. Now that predictive analytics technology is more available, many experts point out that predicting the market is easier than you may think.

    However, financial markets are also highly efficient. As people become more adept at predicting price movements, the marginal returns become smaller. Therefore, it is not possible for predictive analytics to make every investor wealthy. It is still important to understand the market yourself.

    Nevertheless, everybody who relies on predictive analytics has the opportunity to enrich themselves to some degree. The bad news is that the average investor won’t have access to exceptional predictive analytics capabilities. However, they can invest in an institutional account that does have these resources. Investors that pool their money with mutual funds or retirement accounts that use sophisticated predictive analytics technology will have the opportunity to realize higher rates of return in the future.

    What are the risks of predictive analytics in futures investing?

    Machine learning has been a boon for many industries. However, the financial industry is one where it also introduces certain risks.

    One of the biggest risks is that predictive analytics could create the next major financial bubble. Certain assets could be perceived to be undervalued, which would cause people to start purchasing the futures. The problem is that if the fundamentals are inaccurate, this could lead to lots of people overpaying for lousy investments.

    The crux of the problem is that some of the variables that go into predicting asset values are unclear. For one thing, the relationship between future profits and asset value is not nearly as strong as traditional financial theorists would like to believe. This is one of the reasons that the dividend discount model and even the free cash flow model are not nearly as reliable in the real world as MBA professors suggest.

    The other problem is that we live in a very dynamic world. Some of the variables that had a profound impact on company profitability a few years ago might no longer be relevant. There are tens of thousands of different variables that go into determining the value of any given asset. Even the most advanced machine learning system might not be able to pick up on all of them when they are changing every day.

    The value of predictive analytics tools is going to depend on their ability to pick up on rapid changes in the market. Until they have proven their ability to do so, we might need to take their predictions with a grain of salt.

    Author: Rehan Ijaz

    Source: SmartDataCollective

  • Predictive Analytics: Maximizing visitors while preserving nature?  

    Predictive Analytics: Maximizing visitors while preserving nature?

    Recent years have shown that more and more people are visiting the Veluwe. The region is well known for its diversity: you can go for a peaceful hike, a cultural activity, or a bike tour. The leisure possibilities are endless, and tourism is booming. In fact, the region is becoming so popular that the tourism sector is changing its goal from attracting visitors to the Veluwe to managing crowdedness in the Veluwe region.

    This development requires an active change in mindset for VisitVeluwe, the party responsible for organizing tourism in this region. Their goal is to spread visitors over different parts of the region, as well as to spread visitors more equally over time. This spreading of visitors has two main benefits. On the one hand, it reduces over-crowdedness. On the other hand, it maximizes visitors to the Veluwe without causing much damage to the flora and fauna. This second part should not be forgotten, since allowing many people to enjoy the Veluwe continues to be important for VisitVeluwe.

    “There are places where the number of visitors has a negative effect on nature, the quality of life of residents and the experience of visitors. On the other hand, there are also places and times when that is not a problem. We want to get the balance right. To achieve this, it is important to spread the visitors in time and space. It helps enormously to be able to predict how busy certain areas will be so that we can anticipate.” - Pim Nouwens, VisitVeluwe

    However, managing the crowdedness of an area as large and diverse as the Veluwe is no easy task. VisitVeluwe has already taken several steps towards gaining insight into crowdedness in the Veluwe region. A clear example is the already running Crowd Monitor, which helps visitors and managing parties understand in which places and at what times over-crowdedness is most critical.

    Yet, the Crowd Monitor is still a reactive model, which will only get them so far in tackling the issue. The next step is being able to take preventive measures based on a pro-active model. In order to do so, they would need to understand where over-crowdedness will become an issue before it actually does. A pro-active model offers multiple benefits. Firstly, if VisitVeluwe has an understanding of the locations where crowdedness might occur in the near future, it can help them manage visitor streams towards other locations on the Veluwe. Next to that, a predictive crowdedness model can also help visitors decide when to visit a certain place on the Veluwe. Not only will this assist in managing the crowdedness, but it might also improve the visitor experience, as most of us prefer a calm environment when visiting a place like the Veluwe. A final benefit of a predictive crowdedness model is that it can help tourist attractions on the Veluwe understand how many visitors they can expect and the consequences this brings for them (workforce planning, ticket prices, etc.).

    In order to take these preventive actions, VisitVeluwe and Hammer developed a predictive model based on historical visitor data. The model takes visitor data as basic input and combines it with several predictor variables to ensure accurate location-specific predictions. These predictions can subsequently be used to create a crowdedness timeline that will help to identify over-crowdedness issues. Due to the large variety of locations on and around the Veluwe, we opted to use a neural network model for this project. The architecture of such a model means that it is location independent and can be trained on any location with available historical visitor data. The nature of the predictive model will allow it to improve over time without much human interaction, as long as it gathers real-time visitor data. This means that the current project lays a strong foundation for better crowd management solutions in the future and can be integrated into VisitVeluwe's existing Crowd Monitor system.
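    As a rough sketch of this kind of model (not VisitVeluwe's or Hammer's actual implementation), the following trains a tiny one-hidden-layer neural network with plain numpy on synthetic visitor data; the predictor variables and counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictor variables per time slot: e.g. day-of-week, hour,
# temperature (all illustrative, scaled to [0, 1]).
X = rng.uniform(0, 1, size=(200, 3))
# Hypothetical visitor counts: a nonlinear mix of the predictors plus noise.
y = 100 * np.sin(2 * X[:, 0]) + 50 * X[:, 1] * X[:, 2] + rng.normal(0, 2, 200)
y = (y - y.mean()) / y.std()                  # normalize the target

# One hidden layer of 8 tanh units, trained by batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = (h @ W2 + b2).ravel()              # predicted crowdedness
    err = pred - y
    losses.append(float((err ** 2).mean()))   # mean squared error
    # Backpropagation through the two layers.
    g_pred = 2 * err[:, None] / len(y)
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)        # through tanh
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

Because the network only sees predictor variables, not a location identity, the same architecture can be retrained on any location with historical data, which mirrors the location-independence point above.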

    “Together with Hammer, we investigated the possibilities. It was a new and quite complex issue, but I am very satisfied with the approach, skill and cooperation. It looks like we have developed a valuable tool.” - Pim Nouwens, VisitVeluwe

    This predictive model leads to better preservation of the Veluwe, while at the same time allowing as many visitors as possible to enjoy the magnificent experiences the Veluwe region has to offer.

    Source: Hammer, Market Intelligence

  • Predictive modelling in Market Intelligence is hot


    Market intelligence is still an underexposed function in companies. How often do companies have an accurate, up-to-date picture of exactly how large their market is? And of whether it is growing or shrinking?

    B2C companies can still buy expensive reports, at considerable cost, from the information brokers of this world. And if they are lucky enough that segmentations relevant to them were used, that can indeed yield something. B2B companies face a much bigger challenge. Market data is usually not commercially available and has to be produced (whether or not with the help of B2C data), which makes market data even more expensive for these companies.

    Moreover, the discussion above only concerns data on market size and market value: the basics, you could say. Data on competitors, market shares, product developments, and market-defining trends is at least as relevant for setting a good course, as well as for making tactical decisions (purchasing, pricing, distribution).

    Yet there are possibilities! Even with scarce data, it is possible to reconstruct market data. The starting point: if we search the markets where we do have data for predictive variables, then other market data can perhaps be ‘approximated’ or ‘estimated’. This is a form of statistical reconstruction of market data that often proves more reliable than surveys or expert panels. The technique is being applied more and more in market intelligence, so data science is making its entrance in this field as well.

    Once this is commonplace, the step towards forecasting markets is of course not far away, and that question is being asked more and more often. Can we also map out what the market will look like in 5 or perhaps even 10 years? We can! And the quality of those forecasts is increasing, and with it their use. Market intelligence is only becoming more fun, and the game, of course, only more interesting.
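    A minimal illustration of such a multi-year market forecast (a sketch, not any specific model described above): fit an exponential growth trend to hypothetical historical market values with numpy and project 5 and 10 years ahead. All figures are invented.

```python
import numpy as np

# Hypothetical historical market values (EUR bn) for 2015-2020.
years = np.arange(2015, 2021)
values = np.array([4.1, 4.4, 4.8, 5.1, 5.6, 6.0])

# Fit exponential growth: log(value) is linear in time.
g, c = np.polyfit(years - years[0], np.log(values), 1)
cagr = np.exp(g) - 1                       # implied compound annual growth rate

# Project 5 and 10 years beyond the last observation.
horizon = np.array([5, 10])
projected = values[-1] * (1 + cagr) ** horizon
```

A real market forecast would of course also fold in predictive indicators (demographics, GDP, trade) rather than extrapolating the trend alone.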

    Source: Hammer, market intelligence




  • Retailers are using big data for better marketing

    Today, customers’ expectations are growing by leaps and bounds, and the credit goes to technology, which has given them ample choices. Retailers are leaving no stone unturned to provide a better shopping experience, adopting analytical tools to catch up with the changing expectations of consumers. Durjoy Patranabish, Senior Vice President, Blueocean Market Intelligence, spoke to Dataquest about the role of analytics in the retail sector.

    How are retailers using big data analytics to drive real business value?
    The idea of data creating business value is not new; however, the effective use of data is becoming the basis of competition. Retailers are using big data analytics to make a variety of intelligent decisions that help delight customers and increase sales.

    These decisions range from assessing the market, targeting the right segment, and forecasting demand to product planning and localizing promotions. Advanced analytics solutions such as inventory analysis, price point optimization, market basket analysis, cross-sell/up-sell analytics, and real-time sales analytics can be built using techniques like clustering, segmentation, and forecasting. Retailers have now realized the importance of big data and are using it to draw useful insights and manage the customer journey.

    How can advanced clustering techniques be used to predict purchasing behavior in targeted marketing campaigns?
    Advanced clustering techniques can be used to group customers based on their historical purchase behavior, giving retailers a better definition of customer segments on the basis of similar purchases. The resulting clusters can be used to characterize different customer groups, enabling retailers to advertise and offer promotions to these targeted groups. In addition to characterization, clustering allows retailers to predict the buying patterns of new customers based on the profiles generated. Advanced clustering techniques can build a 3D model of the clusters based on key business metrics, such as orders placed, frequency of orders, items ordered, or variation in prices. This business relevance makes it easier for decision makers to identify the problematic clusters that force retailers to use more resources to attain a targeted outcome. They can then focus their marketing and operational efforts on the right clusters to enable optimum utilization of resources.
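    As an illustrative sketch of the clustering step described above (not taken from the interview), a minimal k-means run on invented purchase-behavior features might look like this:

```python
# Minimal k-means over invented customer features:
# (orders placed, order frequency, variation in spend).
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as its cluster's mean.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

customers = np.array([
    [30, 4.0, 0.1], [28, 3.5, 0.2], [2, 0.2, 0.9],
    [3, 0.1, 0.8], [15, 2.0, 0.4], [14, 1.8, 0.5],
])
centroids, labels = kmeans(customers, k=2)
print(labels)  # frequent buyers and occasional buyers fall into separate segments
```

    In practice the features would come from transaction history, and libraries such as scikit-learn would replace the hand-rolled loop.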

    What trends are boosting the big data analytics space?

    Some of the trends in the analytics space are:

    1. The need for an integrated, scalable, and distributed data store as a single repository will drive the growth of data lakes. This will also increase the need for data governance.
    2. Spending on cloud-based big data analytics solutions is expected to grow three times faster than spending on on-premises solutions.
    3. Deep learning, which combines machine learning and artificial intelligence to uncover relationships and patterns within various data sources without needing specific models or programming instructions, will emerge.
    4. The explosion of data coming from the Internet of Things will accelerate real-time and streaming analytics, requiring data scientists to sift through data in search of repeatable patterns that can be developed into event processing models.
    5. The analytics industry will become data agnostic, with analytics solutions focused on people and machines rather than on structured versus unstructured data.
    6. Data will become an asset that organizations can monetize by selling it or providing value-added content.

    What are your views on 'Big Data for Better Marketing'? How can retailers use analytics tools to stay ahead of their competitors?

    Whether it is to provide a smarter shopping experience that influences the purchase decisions of customers to drive additional revenue, or to deliver tailor made relevant real-time offers to customers, big data offers a lot of opportunities for retailers to stay ahead of the competition.

    Personalized Shopping Experience: Data can be analyzed to create detailed customer profiles that can be used for micro-segmentation and to offer a personalized shopping experience. A 360-degree customer view will show retailers how best to contact their customers and recommend products based on their likes and shopping patterns.
    Sentiment analysis can tell retailers how customers perceive their actions, commercials, and the products they have on offer. The analysis of what is being said online will provide retailers with additional insights into what customers are really looking for, enabling them to tailor their assortments to local needs and wishes.
    Demand Forecast: Retailers can predict future demand using various data sets such as web browsing patterns, buying patterns, enterprise data, social media sentiment, weather data, and news and event information to predict the next hot items in coming seasons. Using this information, retailers can stock up and deliver the right products in the right amounts to the right channels and regions. An accurate demand forecast will not only help retailers optimize their inventory and improve just-in-time delivery but also optimize in-store staffing, thus bringing down costs.
    Innovative Optimization: Customer demand, competitor activity, and relevant news & events can be used to create models that automatically synchronize pricing with inventory levels, demand and the competition. Big data can also enable retailers to optimize floor plans and find revenue optimization possibilities.
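    A toy version of such a demand forecast, assuming a simple additive model of linear trend plus day-of-week seasonality fitted by least squares (the sales series is invented):

```python
# Hedged sketch: forecast daily demand as trend + weekly seasonality.
import numpy as np

# Invented daily sales for two weeks (Mon..Sun).
sales = np.array([100, 92, 95, 110, 130, 160, 150,
                  104, 96, 99, 115, 134, 165, 156], dtype=float)

t = np.arange(len(sales))
dow = t % 7
# Design matrix: intercept, linear trend, one dummy per weekday (Monday dropped).
X = np.column_stack([np.ones_like(t, dtype=float), t.astype(float)] +
                    [(dow == d).astype(float) for d in range(1, 7)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

def predict(day):
    d = day % 7
    row = np.r_[1.0, float(day), [1.0 if d == k else 0.0 for k in range(1, 7)]]
    return float(row @ coef)

print(round(predict(14), 1))  # forecast for the Monday of week 3
```

    Real retail forecasts would fold in the external signals mentioned above (weather, events, sentiment) as additional regressors or use dedicated forecasting libraries.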

    Source: DataQuest

  • Technology advancements: a blessing and a curse for cybersecurity

    With the ever-growing impact of big data, hackers have access to more and more terrifying options. Here's what we can do about it.

    Big data is the lynchpin of new advances in cybersecurity. Unfortunately, predictive analytics and machine learning technology is a double-edged sword for cybersecurity. Hackers are also exploiting this technology, which means that there is a virtual arms race between cybersecurity companies and cybercriminals.

    Datanami has talked about the ways that hackers use big data to coordinate attacks. This should be a wakeup call to anybody that is not adequately prepared.

    Hackers exploit machine learning to avoid detection

    Jathan Sadowski wrote an article in The Guardian a couple of years ago on the intersection between big data and cybersecurity. Sadowski argued that big data is to blame for a growing number of cyberattacks.

    In the evolution of cybercrime, phishing and other email-borne menaces represent increasingly prevalent threats. FireEye claims that email is the launchpad for more than 90% of cyber attacks, while a multitude of other statistics confirm that email is the preferred vector for criminals.

    This is largely because of attackers' knowledge of machine learning. They use it to better understand potential victims, choose their targets more carefully, and penetrate defenses more effectively.

    That being said, people are increasingly aware of things like phishing attacks, and most people know that email links and attachments can pose a risk. Many are even on the lookout for suspicious PDFs, compressed archives, camouflaged executables, and Microsoft Office files with dodgy macros inside. Plus, modern anti-malware solutions are quite effective at identifying and stopping these hoaxes in their tracks. The trouble is that big data technology helps criminals orchestrate more believable social engineering attacks.

    Credit card fraud represents another prominent segment of cybercrime, causing bank customers to lose millions of dollars every year. As financial institutions have become familiar with the mechanisms of these stratagems over time, they have refined their procedures to fend off card skimming and other commonplace exploitation vectors. They are developing predictive analytics tools with big data to prepare for threats before they surface.

    The fact that individuals and companies are often prepared for classic phishing and banking fraud schemes has incentivized fraudsters to add extra layers of evasion to their campaigns. The sections below highlight some of the methods used by crooks to hide their misdemeanors from potential victims and automated detection systems.

    Phishing-as-a-Service on the rise, due to big data

    Although phishing campaigns are not new, the way in which many of them are run is changing. Malicious actors used to undertake a lot of tedious work to orchestrate such an attack. In particular, they needed to create complex phishing kits from scratch, launch spam hoaxes that looked trustworthy, and set up or hack websites to host deceptive landing pages. Big data helps hackers understand what factors work best in a phishing attack and replicate it better.

    Such activity required a great deal of technical expertise and resources, which raised the bar for wannabe scammers who were willing to enter this shady business. As a result, in the not-so-distant past, phishing was mostly a prerogative of high-profile attackers.

    However, things have changed, most notably with the popularity of a cybercrime trend known as Phishing-as-a-Service (PHaaS). This refers to a malicious framework providing malefactors with the means to conduct effective fraudulent campaigns with very little effort and at an amazingly low cost.

    In early July, 2019, researchers unearthed a new PHaaS platform that delivers a variety of offensive tools and allows users to conduct full-fledged campaigns while paying inexpensive subscription fees. The monthly prices for this service range from $50 to $80. For an extra fee, a PHaaS service might also include lists of email addresses belonging to people in a certain geographic region. For example, the France package contains about 1.5 million French 'leads' that are 'genuine and verified'.

    The PHaaS product in question lives up to its turnkey promise as it also provides a range of landing page templates. These scam pages mimic the authentic style of popular services such as OneDrive, Adobe, Google, Dropbox, Sharepoint, DocuSign, LinkedIn, and Office 365, to name a few. Moreover, the felonious network saves its 'customers' the trouble of looking for reliable hosting for the landing sites. This feature is already included in the service.

    To top it all off, the platform accommodates sophisticated techniques to make sure the phishing campaigns slip under the radar of machine learning systems and other automated defenses. In this context, it reflects the evasive characteristics of many present-day phishing waves. The common anti-detection quirks are as follows:

    • Content encryption: As a substitute for regular character encoding, this method encrypts content and then applies JavaScript to decrypt the information on the fly when a would-be victim views it in a web browser.
    • HTML character encoding: This trick prevents automated security systems from reading fraudulent data while ensuring that it is rendered properly in an email client or web browser.
    • Inspection blocking: Phishing kits prevent known security bots, AV engines, and various user agents from accessing and crawling the landing pages for analysis purposes.
    • Content injection: In the upshot of this stratagem, a fragment of a legitimate site’s content is substituted with rogue information that lures a visitor to navigate outside of the genuine resource.
    • The use of URLs in email attachments: To obfuscate malicious links, fraudsters embed them within attachments rather than in the email body.
    • Legitimate cloud hosting: Phishing sites can evade the blacklisting trap if they are hosted on reputable cloud services, such as Microsoft Azure. In this case, an additional benefit for the con artists is that their pages use a valid SSL certificate.
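    To see why the HTML character encoding trick above defeats naive keyword filters, consider this small defensive sketch: the entity-encoded text contains no matchable keywords, while a scanner that decodes entities first (as a browser does when rendering) recovers them.

```python
# Defensive illustration: numeric HTML entities hide text from a raw
# keyword match, but decoding entities before scanning restores it.
import html

message = "Verify your account now"
encoded = "".join(f"&#{ord(c)};" for c in message)

print(encoded[:24] + "...")                  # numeric entities, unreadable as-is
print("account" in encoded)                  # raw keyword match fails
print("account" in html.unescape(encoded))   # decode first, then scan: match found
```

    The practical lesson for defenders is simply to normalize and decode content before applying keyword or signature checks.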

    The above evasion tricks enable scammers to perpetrate highly effective, large-scale attacks against both individuals and businesses. The utilization and success of these techniques could help explain a 17% spike in this area of cybercrime during the first quarter of 2019.

    The scourge of card enrollment

    Banking fraud and identity theft go hand in hand. This combination is becoming more harmful and evasive than ever before, with malicious payment card enrollment services gaining momentum in the cybercrime underground. The idea is that the fraudster impersonates a legitimate cardholder in order to access the target’s bank account with virtually no limitations.

    According to security researchers’ latest findings, this particular subject is trending on Russian hacking forums. Threat actors are even providing comprehensive tutorials on card enrollment 'best practices'.

    The scheme starts with the harvesting of Personally Identifiable Information (PII) related to the victim’s payment card, such as the card number, expiration date, CVV code, and cardholder’s full name and address. A common technique used to uncover this data is to inject a card-skimming script into a legitimate ecommerce site. Credit card details can also be found for sale on the dark web, making things even easier.

    The next stage involves some extra reconnaissance by means of OSINT (Open Source Intelligence) or shady checking services that may provide additional details about the victim for a small fee. Once the crooks obtain enough data about the individual, they attempt to create an online bank account in the victim’s name (or perform account takeover fraud if the person is already using the bank’s services). Finally, the account access is usually sold to an interested party.

    To stay undetected, criminals leverage remote desktop services and SSH tunnels that cloak the fraud and make it appear that it’s always the same person initiating an e-banking session. This way, the bank isn’t likely to identify an anomaly even when the account is created and used by different people.

    To make fraudulent purchases without being exposed, the hackers also change the billing address within the account settings so that it matches the shipping address they enter on ecommerce sites.

    This cybercrime model is potent enough to wreak havoc in the online banking sector, and security gurus have yet to find an effective way to address it.

    These increasingly sophisticated evasion techniques allow malefactors to mastermind long-running fraud schemes and rake in sizeable profits. Moreover, new dark web services have made it amazingly easy for inexperienced crooks to engage in phishing, e-banking account takeover, and other cybercrimes. Under the circumstances, regular users and organizations should keep hardening their defenses and stay leery of the emerging perils.

    Big data makes hackers a horrifying threat

    Hackers are using big data to perform more terrifying attacks every day. We need to understand the growing threat and continue fortifying our defenses to protect against them.

    Author: Diana Hope

    Source: SmartDataCollective

  • The 6 abilities of the perfect data scientist  

    There are currently over 400K job openings for data scientists on LinkedIn in the U.S. alone. And, every single one of these companies wants to hire that magical unicorn of a data scientist that can do it all.

    What rare skill set should they be looking for? Conor Jensen, RVP of AI Strategy at AI and data analytics provider Dataiku, has boiled it all down to the following: 

    1. Communicates Effectively to Business Users: To let the data tell a story, a data scientist needs to be able to take complex statistics and convey the results persuasively to any audience.

    2. Knows Your Business: A data scientist needs to have an overall understanding of the key challenges in your industry, and consequently, your business. 

    3. Understands Statistical Phenomena: Data scientists must be able to correctly interpret statistics: is a result representative or not? This skill is key, since the majority of stats that are analyzed contain statistical bias that needs correcting.

    4. Makes Efficient Predictions: The data scientist must have a broad knowledge of algorithms to select the right one, know which features to adjust to best feed the model, and know how to combine complementary data. 

    5. Provides Production-Ready Solutions: Today’s data scientists need to provide services that can run daily, on live data. 

    6. Can Work On A Mass Scale: A data scientist must know how to handle multi-terabyte datasets to build a robust model that holds up in production. In practice this means that they need to have a good idea of computation time, what can be done in memory and what requires Hadoop and MapReduce.
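    The in-memory versus Hadoop/MapReduce distinction in point 6 comes down to streaming partial aggregates instead of loading everything at once. A minimal single-machine sketch of that map/reduce shape, with invented data:

```python
# Hedged sketch: compute a global mean over a 'huge' dataset one chunk
# at a time, with bounded memory -- the same map/reduce pattern that
# Hadoop distributes across machines. The data generator is invented.
def chunks(n_rows, chunk_size=1000):
    """Simulate reading a huge dataset chunk by chunk."""
    for start in range(0, n_rows, chunk_size):
        yield list(range(start, min(start + chunk_size, n_rows)))

total, count = 0, 0
for chunk in chunks(10_000):
    total += sum(chunk)   # map step: per-chunk partial aggregate
    count += len(chunk)
mean = total / count      # reduce step: combine partials
print(mean)  # 4999.5
```

    Knowing which computations decompose into such partial aggregates (means, counts, sums) and which do not (medians, joins) is exactly the judgment point 6 describes.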

    Source: Insidebigdata

  • The Impact of Predictive Analytics on Developments in Mobile Phone Tracking

    Predictive analytics technology continues to shape the world in surprising new ways. One of the trends that few people have talked about is the role of predictive analytics in the future of mobile phone tracking.

    Mobile phone data has played a huge role in research. It has been especially important in behavioral research, according to a study published by the National Institutes of Health.

    The first mobile phone tracking tools made a real difference for people's peace of mind. However, they were limited in the scope of their analytics: people had to draw their own conclusions while monitoring the behavior of someone using a phone. Predictive analytics could change that.

    The Role of Mobile Phone Tracking Before the Emergence of Predictive Analytics and AI

    When you search the internet for a way to track a person’s cell phone location, a thousand results appear on your screen. Questions like how to find a cell phone location and how to track a cell phone location without installing software are frequently asked on different groups and forums.

    These tracking tools rely extensively on big data and artificial intelligence. Newer AI algorithms make things a lot easier. They are evolving even further and will bring more benefits in the years to come.

    Most of these questions stem from the concerns of people who may have a different set of reasons to track a cell phone location. A lot of concerned parents want to find out where their kids usually go after school or which places they frequently visit with their friends.

    Then there are employers who want to find out who their employees meet outside of office hours, because they would not want their company's confidential information to be leaked. Last but not least, spouses may want to keep an eye on their partners' whereabouts to know whether they are being cheated on in their relationship.

    In all of these cases, it seems there is only one way anyone can find out what's actually true and that is being able to instantly track the target person's cell phone location without them knowing.

    Predictive Analytics is Disrupting the Mobile Tracking Industry... in a Good way

    There are a number of new potential applications for mobile phone tracking, now that predictive analytics has reached the masses. One of the benefits is that new predictive analytics algorithms should be able to soon help people make better predictions about people's behavior based on previous trajectories with known coordinates.

    For example, if a high school student tends to go down a particular street, that could be an indication that he is on his way to visit his girlfriend. If the parents don't want him seeing her, then they will want to get advance notice so they can start looking at his phone. Predictive analytics algorithms make it a lot easier to make these predictions.

    Forecasting behavior with mobile data isn't an exact science. It takes time to collect enough data to make these assumptions. However, since most people have their phones on them virtually all the time, it shouldn't take more than a couple of weeks to a month before predictive analytics algorithms have enough data on these users to make these kinds of determinations.

    The Predictive Analytics Era of Cell Phone Monitoring Solutions

    With the latest technology, several cell phone monitoring apps and cell phone trackers have appeared that are designed for this very purpose. There are several apps that use big data to help you track your target's cell phone location secretly and within minutes. However, the majority of them come with their own set of problems: some turn out to be bogus, some require you to perform a jailbreak, and some ask you to fill out online surveys in order to track a person’s cell phone location.

    In other instances, they may ask you to download third-party software or an app onto your phone, or to open a certain web link on your device. These activities may be risky because you never know whether the third-party app contains malicious content or the web link will inject malware into your device. Therefore, you should avoid such solutions altogether.

    After reading this, you may be wondering how a cell phone monitoring app can still find someone's cell phone location despite these problems. A cell phone monitoring app does work effectively and performs its job of finding someone's location, provided it comes from a trustworthy company with a good reputation in the market. This will be very useful when these algorithms are merged with predictive analytics technology.

    A cell phone monitoring app is considered to be genuine and effective if it does not ask you to: download a third-party app/software on your device; open a certain web link to complete the process; fill out the online survey to provide them with your human verification, or perform a jailbreak. If you're not using the right cell phone monitoring app then you may increase the chance of getting caught and even expose the target phone to malware or other malicious viruses.

    Finding Cell Phone Location Via a Monitoring App

    The majority of cell phone monitoring apps require you to install the app on the target phone in order to find its current location. Finding someone's cell phone location without installing software may work for iPhones, but it won't work for Android phones.

    No physical access is required to find an iPhone's location. All you need is access to the target's iTunes credentials so you can log into their Apple account and find their location using the 'Find My iPhone' feature.

    With an Android phone, however, you must have physical access to the target phone, if only for a few minutes, so you can deploy the cell phone monitoring app on the device.

    How Does a Monitoring App Find Someone's Cell Phone Location?

    A cell phone monitoring app helps you find someone's cell phone location secretly. All you need to do is download the cell phone monitoring app from the official website and then get it installed on the target cell phone. The minute the app is installed, you will receive credentials for your online dashboard from where you will be able to remotely track your target's cell phone location within minutes.

    What actually happens is that after the cell phone monitoring app is deployed on the target phone, it starts tracking their cell phone location. Some monitoring apps track the location using GPS, some do so without GPS, and some let you track the target's location both with and without GPS technology.

    Wherever your target goes, whatever places they visit, all the information regarding their whereabouts will be recorded and logged by the cell phone monitoring app at different intervals and then the same information will be shared with you on your online dashboard. Basically, all your questions regarding how to tap a cell phone will be answered using a cell phone monitoring app.

    Another reason why using a cell phone monitoring app is considered beneficial is that it lets you sneak into someone's cell phone and find their current location without them knowing. There is a good chance your target will never know their location is being tracked.

    For your understanding, here is an example. Suppose your target is using an Android phone and you want to keep track of their cell phone location. First, you get your hands on their phone so you can install the cell phone monitoring app on it. Once it is installed, you open the Applications list on their phone and hide the monitoring app icon so it does not remain visible to the target.

    After the icon has been hidden, your target won't be able to find out about the monitoring app being installed on their phone and as a result, they won't be able to tamper with it. This way you will be able to secretly track their cell phone location without making them suspicious or apprehensive.

    Predictive Analytics Changes the Future of Mobile Phone Tracking

    Predictive analytics is changing the future of mobile phone tracking in countless ways. It is going to play a very important role in forecasting people's behavior in ways that contemporary mobile tracking devices are unable to do.

    Author: Annie Qureshi

    Source: Smart Data Collective

  • The reinforcing relationship between AI and predictive analytics

    Enterprises have long seen the value of predictive analytics, but now that AI (artificial intelligence) is starting to influence forecasting tools, the benefits may start to go even deeper.

    Through machine learning models, companies in retail, insurance, energy, meteorology, marketing, healthcare and other industries are seeing the benefits of predictive analytics tools. With these tools, companies can predict customer behavior, foresee equipment failure, improve forecasting, identify and select the best product fit for customers, and improve data matching, among other things.

    Enterprises of all sizes are now finding that the combination of predictive analytics and AI can help them stay ahead of their competitors.

    Forecasting gets a boost with AI

    Retail brands are constantly looking to stay relevant by associating themselves with the latest trends. Before each season, designers are continuously working on creating new styles and designs they think will be successful. However, these predictions can be faulty based on a number of factors, such as changes in customer buying patterns, changing tastes in particular colors or styles, and other factors that are difficult to predict.

    AI-based approaches to demand projection can reduce forecasting errors by up to 50%, according to Business of Fashion. This improvement can mean big savings for a retail brand's bottom line and positive ROI for organizations that are inventory-sensitive.

    Another industry that has seen tremendous improvements recently is meteorology and weather forecasting. Traditionally, weather forecasting has been prone to error. However, that is changing, as the accuracy of 5-day forecasts and hurricane tracking forecasts has improved dramatically in recent years.

    According to the Weather Channel, hurricane track forecasts are now more accurate five days in advance than two-day forecasts were in 1992. These extra few days can give people in a hurricane's path extra time to prepare and evacuate, potentially saving lives.

    Another example is the use of predictive analytics by utility companies to help spot trends in energy usage. Smart meters monitor activity and notify customers of consumption spikes at certain times of the day, helping them cut back on power usage. Utility companies are also helping customers predict when they might get a high bill based on a variety of data points and can send out alerts to warn customers if they are running up a large bill that month.

    Reducing downtime and disturbance

    For industries that rely heavily on equipment, such as manufacturing, agriculture, energy, and mining, unexpected downtime can be costly. Companies are increasingly using predictive analytics and AI systems to help detect and prevent failures.

    AI-enabled predictive maintenance systems can self-monitor and report equipment issues in real time. IoT sensors attached to critical equipment gather real-time data, spotting issues or potential problems as they arise and notifying teams so they can respond right away. These systems can also predict upcoming issues, reducing costly unplanned downtime.
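    One simple way such a system can flag anomalies (an assumed illustration, not a method from the article) is a rolling z-score over recent sensor readings:

```python
# Hedged sketch: flag IoT sensor readings that deviate sharply from the
# recent past -- a minimal anomaly detector for predictive maintenance.
import statistics

def rolling_anomalies(readings, window=10, threshold=3.0):
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past) or 1e-9  # avoid division by zero
        if abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Invented steady vibration signal with one spike at index 15.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
          1.02, 0.98, 1.0, 1.05, 0.95, 5.0, 1.0, 1.02]
print(rolling_anomalies(signal))  # [15]
```

    Production systems layer forecasting models on top of such detectors so that teams are warned before, not after, a component drifts out of its normal range.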

    Power plants need to be monitored constantly to make sure they are functioning properly and safely, and that they are providing energy to all the customers that rely on them for electricity. Predictive analytics is being used to run early warning systems that can identify anomalies and notify managers of issues weeks to months earlier than traditional warning systems. This can lead to improved maintenance planning and more efficient prioritization of maintenance activities.

    Additionally, AI can help predict when a component or piece of equipment might fail, reducing unexpected equipment failure and unplanned downtime while also lowering maintenance costs.

    In industries which rely heavily on location data, such as mining, making sure you're operating in the correct area is paramount. Goldcorp, one of the largest gold mining companies in the world, partnered with IBM Watson to improve its targeting of new deposits of gold.

    By analyzing previously collected data, IBM Watson was able to improve geologists' accuracy of finding new gold deposits. Through the use of predictive analytics, the company was able to gather new information from existing data, better determine specific areas to explore next, and reach high-value exploration targets faster.

    Increased situational awareness

    Predictive analytics and AI are also great at anticipating situational events by collecting data from the environment and making decisions based on that data. Such systems predict future events from data rather than merely reacting to current conditions.

    Brands need to stay on top of their online presence, as well as what's being said about them on social media. Tracking social media to get real-time feedback from customers is important, especially for retail brands and restaurants. Bad reviews and negative comments can be detrimental, particularly for smaller brands.

    With this awareness and by tracking comments on social media in (near) real-time, companies can gather immediate feedback and respond to situations quickly. Situational awareness can also help with competition tracking, market awareness, market trend predictions and anticipated geopolitical problems.

    With companies of all sizes in every industry trying to stay ahead of their competitors and predict market trends, this forward-looking approach of predictive analytics is proving valuable. Predictive analytics is such a core part of AI application development that it is one of the core seven patterns of AI identified by AI market research and analysis firm Cognilytica.

    The use of machine learning to help give humans more data to make better decisions is compelling, and it's one of the most beneficial uses of machine learning technology.

    Author: Kathleen Walch

    Source: TechTarget

  • The vision of IBM on Analytics

    IBM’s Vision user conference brings together customers who use its software for financial and sales performance management (FPM and SPM, respectively) as well as governance, risk management and compliance (GRC). Analytics is a technology that can enhance each of these activities. The recent conference and many of its sessions highlighted IBM’s growing emphasis on making more sophisticated analytics easier to use by – and therefore more useful to – general business users and their organizations. The shift is important because the IT industry has spent a quarter of a century trying to make enterprise reporting (that is, descriptive analytics) suitable for an average individual to use with limited training. Today the market for reporting, dashboards and performance management software is saturated and largely a commodity, so the software industry – and IBM in particular – is turning its attention to the next frontier: predictive and prescriptive analytics. Prescriptive analytics holds particular promise for IBM’s analytics portfolio.

    The three basic types of analytics – descriptive, predictive and prescriptive – often are portrayed as a hierarchy, with descriptive analytics at the bottom and predictive and prescriptive (often referred to as “advanced analytics”) on the next two rungs. Descriptive analytics is like a rear-view mirror on an organization’s performance. This category includes variance and ratio analyses, dashboards and scorecards, among others. Continual refinement has enabled the software industry to largely succeed in making descriptive analytics an easy-to-use mainstream product (even though desktop spreadsheets remain the tool of choice). Today, companies in general and finance departments in particular handle basic analyses well, although they are not as effective as they could be. Our research on next-generation finance analytics shows, for example, that most financial analysts (68%) spend the largest amount of their time in the data preparation phases while a relatively small percentage (28%) use the bulk of their time to do what they are supposed to be doing: analysis. We find that this problem is mainly the result of issues with data, process and training.

    The upward shift in focus to the next levels of business analytics was a common theme throughout the Vision conference. This emphasis reflects a key element of IBM’s product strategy: to achieve a competitive advantage by making it easy for most individuals to use advanced analytics with limited training and without an advanced degree in statistics or a related discipline.


    The objective in using predictive analytics is to improve an organization’s ability to determine what’s likely to happen under certain circumstances with greater accuracy. It is used for four main functions:

    • Forecasting – enabling more nuanced projections by using multiple factors (such as weather and movable holidays for retail sales)
    • Alerting – when results differ materially from forecast values
    • Simulation – understanding the range of possible outcomes under different circumstances
    • Modeling – understanding the range of impacts of a single factor.
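    The four functions above can be sketched with a tiny example. This is illustrative only: synthetic sales data with two invented drivers (temperature and a holiday flag), an ordinary-least-squares fit with numpy, and an assumed 10% alerting threshold; none of the numbers come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: sales driven by temperature and a movable-holiday flag
temperature = rng.uniform(10, 30, 100)
holiday = rng.integers(0, 2, 100).astype(float)
sales = 50 + 2.0 * temperature + 15.0 * holiday + rng.normal(0, 1, 100)

# Forecasting: fit a multi-factor model with ordinary least squares
X = np.column_stack([np.ones(100), temperature, holiday])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

def forecast(temp, hol):
    """Project sales for a given temperature and holiday flag."""
    return coef[0] + coef[1] * temp + coef[2] * hol

# Alerting: flag an actual result that differs materially from the forecast
actual, predicted = 120.0, forecast(25, 1)
alert = abs(actual - predicted) / predicted > 0.10
```

    Simulation and modeling follow the same pattern: call `forecast` across a range of inputs to see the spread of possible outcomes, or vary one factor while holding the others fixed to isolate its impact.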

    Our research on next-generation business planning finds that, despite its potential to improve the business value of planning, only one in five companies uses predictive analytics extensively in its planning processes.

    Predictive analytics can be useful for every facet of a business and especially for finance, sales and risk management. It can help these functions achieve greater accuracy in sales or operational plans, financial budgets and forecasts. The process of using it can identify the most important drivers of outcomes from historical data, which can support more effective modeling. Because plans and forecasts are rarely 100 percent accurate, a predictive model can support timely alerts when outcomes are significantly different from what was projected, enabling organizations to better understand the reasons for a disparity and to react to issues or opportunities sooner. When used for simulations, predictive models can give executives and managers deeper understanding of the range of potential outcomes and their most important drivers.

    Prescriptive analytics, the highest level, helps guide decision-makers to the best choice for achieving strategic or tactical objectives under a specified set of circumstances. The term is most widely applied to two areas:

    • Optimization – determining the best choice by taking into account often conflicting business objectives or other trade-offs while factoring in business constraints – for example, determining the best price to offer customers based on their characteristics. This helps a business strike the best balance of potential revenue and profitability, or a farmer find the least costly mix of animal feeds that achieves weight objectives.
    • Stochastic Optimization – determining the best option as above but with random variables such as a commodity price, an interest rate or sales uplift. Financial institutions often use this form of prescriptive analytics to understand how to structure fixed income portfolios to achieve an optimal trade-off between return and risk.
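    The feed-mix case mentioned above is a classic optimization problem. The sketch below is a deliberately tiny, hypothetical version: two invented feeds with made-up costs and protein contents, solved by brute-force search over blend fractions (a real application would use a linear-programming solver such as CPLEX).

```python
# Two hypothetical feeds with invented cost (per kg) and protein (g per kg)
feeds = {"corn": {"cost": 0.30, "protein": 9},
         "soy":  {"cost": 0.90, "protein": 40}}
required_protein = 20  # g per kg of blend

# Brute-force search over blend fractions in 1% steps; an LP solver would
# find the same answer exactly, the grid just keeps the sketch dependency-free
best = None
for i in range(101):
    corn = i / 100
    soy = 1 - corn
    protein = corn * feeds["corn"]["protein"] + soy * feeds["soy"]["protein"]
    cost = corn * feeds["corn"]["cost"] + soy * feeds["soy"]["cost"]
    if protein >= required_protein and (best is None or cost < best[0]):
        best = (cost, corn, soy)

cost, corn, soy = best  # cheapest blend that still meets the requirement
```

    The stochastic variant would replace the fixed costs with random variables and optimize an expected cost or a risk-adjusted trade-off instead.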

    General purpose software packages for predictive and prescriptive analytics have existed for decades, but they were designed for expert users, not the trained rank-and-file. However, some applications that employ optimization for a specific purpose have been developed for nonexpert business users. For example, price and revenue optimization software, which I have written about, is used in multiple industries. Over the past few years, IBM has been making progress in improving the ease of use of general purpose predictive and prescriptive analytics. These improvements were on display at Vision. One of the company’s major initiatives in this area is Watson Analytics. It is designed to simplify the process of gathering a set of data, exploring it for meaning and importance and generating graphics and storyboards to convey the discoveries. Along the way, the system can evaluate the overall suitability of the data the user has assembled for creating useful analyses and assist general business users in exploring its meaning. IBM offers a free version that individuals can use on relatively small data sets as a test drive. Watson is a cognitive analytics system, which means it is by nature a work in progress. Through experience and feedback it learns, among other things, terminologies, analytical methods and the nuances of data structures. As such it will become more powerful as more people use it for a wider range of purposes because of the system’s ability to “learn” rather than rely on a fixed set of rules and logic.

    Broader use of optimization is the next frontier for business software vendors. Created and used appropriately, optimization models can deliver deep insights into the best available options and strategies more easily, accurately, consistently and effectively than conventional alternatives. Optimization eliminates individual biases, flawed conventional wisdom and the need to run ongoing iterations to arrive at the seemingly best solution. Optimization is at the heart of network management and price and revenue optimization, to name two common application categories. Dozens of optimization applications (including ILOG, which IBM acquired) are available, but they are aimed at expert users.

    IBM’s objective is to make such prescriptive analytics useful to a wider audience. It plans to infuse optimization capabilities into all of its analytical applications. Optimization can be used on a scale from large to small. Large-scale optimization supports strategic breakthroughs or major shifts in business models. Yet there also are many more ways that optimization techniques embedded in a business application – micro-optimization – can be applied to business. In sales, for example, it can be applied to territory assignments taking into account multiple factors. In addition to making a fair distribution of total revenue potential, it can factor in other characteristics such as the size or profitability of the accounts, a maximum or minimum number of buying units and travel requirements for the sales representative. For operations, optimization can juggle maintenance downtime schedules. It can be applied to long-range planning to allocate R&D investments or capital outlays. In strategic finance it can be used to determine an optimal capital structure where future interest rates, tax rates and the cost of equity capital are uncertain.

    Along the way IBM also is trying to make optimization more accessible to expert users. Not every company or department needs or can afford a full suite of software and hardware to create applications that employ optimization. For them, IBM recently announced Decision Optimization on Cloud (DOcloud), which provides this capability as a cloud-based service; it also broadens the usability of IBM ILOG CPLEX Optimizer. This service can be especially useful to operations research professionals and other expert users. Developers can create custom applications that embed optimization to prescribe the best solution without having to install any software. They can use it to create and compare multiple plans and understand the impacts of various trade-offs between plans. The DOcloud service also provides data analysis and visualization, scenario management and collaborative planning capabilities. One example given by IBM is a hospital that uses it to manage its operating room (OR) scheduling. ORs are capital-intensive facilities with high opportunity costs: they handle procedures that require specific individuals and different combinations of classes of specialists, and procedures have different degrees of time flexibility. Without an optimization engine to take account of all the variables and constraints, crafting a schedule is time-consuming. And since “optimal” solutions to business problems are fleeting, an embedded optimization engine enables an organization to replan and reschedule quickly to speed up decision cycles.

    Businesses are on the threshold of a new era in their use of analytics for planning and decision support. However, numerous barriers still exist that will slow widespread adoption of more effective business practices that take full advantage of the potential that technology offers. Data issues and a lack of awareness of the potential to use more advanced analytics are two important ones. Companies that want to lead in the use of advanced analytics need leadership that focuses on exploiting technology to achieve a competitive advantage.

    Author: Robert Kugel

  • Using Artificial Intelligence to see future virus threats coming


    Researchers use machine learning algorithms in novel approach to finding future zoonotic virus threats.

    Most of the emerging infectious diseases that threaten humans – including coronaviruses – are zoonotic, meaning they originate in another animal species. And as population sizes soar and urbanisation expands, encounters with creatures harbouring potentially dangerous diseases are becoming ever more likely.

    Identifying these viruses early, then, is becoming vitally important. A new study out today in PLOS Biology from a team of researchers at the University of Glasgow, UK, has identified a novel way to do this kind of viral detective work, using machine learning to predict the likelihood of a virus jumping to humans.

    According to the researchers, a major stumbling block for understanding zoonotic disease has been that scientists tend to prioritise well-known zoonotic virus families based on their common features. This means there are potentially myriad viruses, unrelated to known zoonotic diseases and undiscovered or little studied, that may hold zoonotic potential – the ability to make the species leap.

    In order to circumvent this problem, the team developed a machine learning algorithm that could infer the zoonotic potential of a virus from its genome sequence alone, by identifying characteristics that link it to humans, rather than looking at taxonomic relationships between the virus being studied and existing zoonotic viruses.

    The team found that viral genomes may have generalisable features that enable them to infect humans, but which are not necessarily taxonomically closely related to other human-infecting viruses. They say this approach may present a novel opportunity for viral sleuthing.
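    The study’s actual model is not reproduced here, but the general idea of scoring a genome from its sequence features alone, rather than from taxonomy, can be illustrated with a toy sketch: k-mer frequency profiles compared against invented “human-infecting” and “non-human” reference profiles. Everything below is hypothetical and far simpler than the published method.

```python
from collections import Counter

def kmer_profile(seq, k=3):
    """Normalised k-mer frequency vector for a genome sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def similarity(p, q):
    """Cosine similarity between two k-mer profiles."""
    dot = sum(p[x] * q.get(x, 0.0) for x in p)
    norm = (sum(v * v for v in p.values()) * sum(v * v for v in q.values())) ** 0.5
    return dot / norm

# Invented reference profiles standing in for human-infecting and
# non-human-infecting training genomes
human_like = kmer_profile("ATGCGC" * 50)
non_human = kmer_profile("AATTAA" * 50)

def zoonotic_score(seq):
    """Score a new genome: positive means closer to the human-infecting profile."""
    p = kmer_profile(seq)
    return similarity(p, human_like) - similarity(p, non_human)
```

    The point of the sketch is only that the score depends on sequence composition, not on how the virus is related taxonomically to known zoonotic families.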

    “By highlighting viruses with the greatest potential to become zoonotic, genome-based ranking allows further ecological and virological characterisation to be targeted more effectively,” the authors write.

    “These findings add a crucial piece to the already surprising amount of information that we can extract from the genetic sequence of viruses using AI techniques,” says co-author Simon Babayan.

    “A genomic sequence is typically the first, and often only, information we have on newly discovered viruses, and the more information we can extract from it, the sooner we might identify the virus’s origins and the zoonotic risk it may pose.

    “As more viruses are characterised, the more effective our machine learning models will become at identifying the rare viruses that ought to be closely monitored and prioritised for pre-emptive vaccine development.”

    Author: Amalyah Hart

    Source: Cosmos

  • Using Business Analytics to improve your business


    I have often loudly advocated that enterprise performance management and corporate performance management is the integration of dozens of methods, like strategy maps, key performance indicator (KPI) scorecards, customer profitability analysis, risk management and process improvement.

    But I have insisted that each method requires embedded analytics of all flavors, and predictive analytics especially is needed. Predictive analytics anticipates the future with reduced uncertainty, enabling proactive decisions rather than reactions after the fact, when it may be too late.

    A practical example is analytics embedded in strategy maps, the visualization of an executive team’s causally linked strategic objectives. Statistical correlation analysis can be applied among influencing and influenced KPIs. Organizations struggle with identifying which KPIs are most relevant to measure and then determining the best target for each measure.

    Software from business analytics vendors can now calculate the strength or weakness of causal relationships among the KPIs using correlation analysis and display them visually, such as with the thickness or colors of the connecting arrows in a strategy map. This can validate the quality of KPIs selected. It creates a scientific laboratory for strategy management.
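    The correlation analysis described above can be sketched in a few lines. The KPI names and readings below are invented; the point is only to show a Pearson correlation being computed and mapped to the visual weight of a strategy-map arrow.

```python
import math

def pearson(x, y):
    """Pearson correlation between an influencing and an influenced KPI."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented monthly readings: training hours (influencing KPI) vs.
# customer satisfaction (influenced KPI)
training_hours = [10, 12, 15, 18, 20, 25]
satisfaction = [7.0, 7.2, 7.8, 8.1, 8.3, 9.0]

r = pearson(training_hours, satisfaction)
# Map correlation strength to the weight of the connecting arrow on the map
arrow_weight = "thick" if abs(r) > 0.7 else "thin"
```

    A weak or near-zero correlation would, in the same spirit, flag a causal link on the strategy map that deserves to be questioned.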


    Using the example of professional baseball, an evolving application of business analytics is dynamically pricing home stadium tickets to optimize revenue. The San Francisco Giants experiment with mathematical equations that weigh ticket sales data, weather forecasts, upcoming pitching matchups and other variables to help decide whether the team should incrementally raise or lower prices right up until game day. The revenue from a seat in a baseball stadium is immediately perishable after the game is played, so any extra seat sold at any price drops directly to the bottom line as additional profit.
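    The Giants’ actual pricing equations are not public, but the general idea of nudging prices against expected sales pace can be sketched with an invented rule; the thresholds and step size below are assumptions for illustration only.

```python
def adjust_price(price, tickets_sold, expected_sales, step=0.05):
    """Nudge the ticket price up or down based on sales pace vs. expectation."""
    if tickets_sold > expected_sales * 1.10:   # selling faster than expected
        return round(price * (1 + step), 2)
    if tickets_sold < expected_sales * 0.90:   # lagging: cut the price, since
        return round(price * (1 - step), 2)    # the seat is worthless after game day
    return price
```

    For example, with these assumed thresholds a $40 seat rises to $42 when 1,200 tickets sell against an expected 1,000, and falls to $38 when only 800 sell.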

    Another baseball analytics example involves predicting player injuries, which are increasing at an alarming rate. Using an actuarial approach similar to the insurance industry, the Los Angeles Dodgers’ director of medical services and head athletic trainer, Stan Conte, has been refining a mathematical formula designed to help the Dodgers avoid players who spend their days in the training room and not on the ball field. A player on the injured reserve list is expensive in terms of the missed opportunity from their play and the extra cost to replace them. Conte has compiled 15 years of data plus medical records to test hypotheses that predict the chances a player will be injured and why.

    Greater statistical analysis is yet to come. The New York Times has reported on new technology that could shift previously hard-to-quantify baseball debates, such as the rangiest shortstop or the quickest center fielder, from the realm of argument to mathematical equations. A camera and associated software record the precise speed and location of the ball and every player on the field, dynamically digitizing everything and yielding a treasure trove of new statistics to analyze. Which right fielders charge the ball quickest and then throw it the hardest and most accurately? Guesswork and opinion will give way to fact-based measures.

    An obsession for baseball statistics

    Gerald W. Scully was an economist best known for his article, “Pay and Performance in Major League Baseball,” published in The American Economic Review in December 1974. The article described a method of determining the contribution of individual players to the performance of their teams. He used statistical measures like slugging percentage for hitters and the strikeout-to-walk ratio for pitchers and devised a complex formula for determining team revenue that involved a team’s won-lost percentage and market characteristics of its home stadium, among other factors.

    The Society for American Baseball Research (www.sabr.org), of which I have been a member since the mid-1980s, includes arguably the most obsessive “sabermetrics” fanatics. As a result of hard efforts to reconstruct detailed box scores of every baseball game ever played, and load them into accessible databases, SABR members continue to examine daily every imaginable angle of the game. Bill James, one of SABR’s pioneers and author of The Bill James Baseball Abstract, first published in 1977, is revered as a top authority of baseball analytics.

    How does an organization create a culture of metrics and analytics? Since it is nearing baseball’s World Series time, consider the community of baseball: its managers, team owners, scouts, players and fans. With better information and analysis of that information, baseball teams perform better – they win!

    Legendary baseball manager Connie Mack’s 3,776 career victories stand as one of the most unbreakable records in baseball. Mack won nine pennants and five World Series titles in a career that spanned the first half of the 20th century. One way he gained an advantage over his contemporary managers was by understanding which player skills and metrics most contributed to winning. He was ahead of his time in that he favored hitting power and on-base percentage over a high batting average and speed – an idea that would later become the standard throughout the sport.

    The 2003 book about the business of baseball, Moneyball, describes the depth of analytics that general managers like Billy Beane of the Oakland Athletics apply to selecting the best players, plus batter and pitcher tactics based on the conditions of the team scores, inning, number of outs, and runners on base.

    More recently, the relatively young general manager of the Boston Red Sox, Theo Epstein (who is now with the Chicago Cubs), assured himself of legendary status for how he applied statistics to help overcome the Curse of the Bambino – supposedly originating when the team sold Babe Ruth in 1920 to the New York Yankees – to finally defeat their arch-rival Yankees in 2004 and win a World Series. It ended Boston’s 86-year drought – since 1918 – without a World Series title.

    Author: Gary Cokins

    Source: Information Management

  • Visualization, analytics and machine learning - Are they fads, or fashions?

    I was recently a presenter in the financial planning and analysis (FP&A) track at an analytics conference where a speaker in one of the customer marketing tracks said something that stimulated my thinking. He said, “Just because something is shiny and new or is now the ‘in’ thing, it doesn’t mean it works for everyone.”

    That got me to thinking about some of the new ideas and innovations that organizations are being exposed to and experimenting with. Are they fads and new fashions or something that will more permanently stick? Let’s discuss a few of them:


    Visualization software is all the rage. Your mother said to you when you were a child, “Looks are not everything.” Well, she was wrong. Viewing table data visually, as in a bar histogram, enables people to quickly grasp information with perspective. But be cautious. Yes, it might be nice to import table data from your spreadsheets and display it in a dashboard! Won’t that be fun? It may be fun, but what are the unintended consequences of reporting performance measures as a dial or barometer?

    A concern I have is that measures reported in isolation of other measures provides little to no context as to why the measure is being reported and what “drives” the measure. Ideally dashboard measures should have some cause-and-effect relationship with key performance indicators (KPIs) that should be derived from a strategy map and reported in a balanced scorecard. 

    KPIs monitor progress toward accomplishing the 15-25 strategic objective boxes in a strategy map defined by the executive team. The strategy map provides the context in which the dashboard performance indicators (PIs) can be tested and validated for alignment with the executive team’s strategy.

    Business analytics

    Talk about something that is “hot.” Who has not heard the terms Big Data and business analytics? If you raised your hand, then I am honored to apparently be the first blogger you have ever read. Business analytics is definitely a next managerial wave. I am biased toward it because my 1971 university degree was in industrial engineering and operations research. I love looking at statistics. So do television sports fans, who are now provided “stats” for teams and players in football, baseball, golf and every kind of televised sport. But the peril of business analytics is that it must serve a purpose: problem solving or seeking opportunities.

    The analytics thought leader James Taylor advises, “Work backwards with the end in mind.” That is, know why you are applying analytics. Experienced analysts typically start with a hypothesis to prove or disprove. They don’t apply analytics as if they are searching for a diamond in a coal mine, and they don’t flog the data until it confesses the truth. Instead, they first speculate that two or more things are related or that some underlying behavior is driving a pattern seen in various data.
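    Taylor’s advice can be made concrete: state a hypothesis first, then test it, rather than mining for patterns. Below is a minimal permutation test on made-up daily sales figures for a hypothetical promotion; the data and the significance threshold are assumptions for illustration.

```python
import random

# Hypothesis: a (made-up) promotion lifted average daily sales
baseline = [100, 98, 103, 97, 101, 99, 102]   # daily sales before
promo = [108, 112, 105, 110, 107, 111, 109]   # daily sales during promotion

observed = sum(promo) / len(promo) - sum(baseline) / len(baseline)

# Permutation test: how often does a random relabelling of the same days
# produce a difference at least as large as the one we observed?
random.seed(1)
pooled = baseline + promo
trials, count = 2000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[7:]) / 7 - sum(pooled[:7]) / 7
    if diff >= observed:
        count += 1
p_value = count / trials  # a small value: the lift is unlikely to be chance
```

    The discipline matters more than the particular test: the analyst committed to the hypothesis before looking at the result.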

    Machine learning and cognitive software

    There are an increasing number of articles and blogs with this theme related to artificial intelligence – the robots are coming and they will replace jobs. Here is my take. Many executives, managers, and organizations underestimate how soon they will be affected and the severity of the impact. This means that many organizations are unprepared for the effects of digital disruption and may pay the price through lower competitive performance and lost business. Thus it is important to recognize not only the speed of digital disruption, but also the opportunities and risks that it brings, so that the organization can adjust and re-skill its employees to add value.

    Organizations that embrace a “digital disruptor” way of thinking will gain a competitive edge. Digitization will create new products and services for new markets providing potentially substantial returns for investors in these new business models. Organizations must either “disrupt” or “be disrupted”. Companies often fail to recognize disruptive threats until it is too late. And even if they do, they fail to act boldly and quickly enough. Embracing “digital transformation” is their recourse for protection.

    Fads or fashions?

    Are these fads and fashions or the real deal? Are managers attracted to them as shiny new toys that they must have on their resume for their next bigger job and employer? My belief is that these three “hot” managerial methods and tools are essential. But they need to be thought through and properly designed and customized, not just slapped in willy-nilly.

    Source: Gary Cokins (Information Management)

  • Predictive models for more efficient and effective campaigns



    Who do you target to increase conversion or reduce churn? How do you divide your marketing budget across channels? Do you develop an expensive brochure or go with a cheaper channel such as e-mail? These are the questions a marketer asks when setting up a new campaign. A predictive model provides answers.

    The five W’s of marketing

    When setting up a new campaign, the five W’s are often used:

    • Why do I want to run this campaign?
    • Who do I want to target?
    • What should I offer them?
    • Where should I reach them?
    • When should I reach them?

    In reality it remains difficult to give a good, well-founded answer to all of these questions. The why is usually clear: generating sales or preventing churn. But who is best approached to increase conversion or reduce churn, with which message and through which channel, is not always obvious. A shame, because you want to set up your campaign as effectively and efficiently as possible.

    In practice

    Take a company that sells products and services to consumers on a subscription or contract basis. Think of a lease car, a phone subscription, an energy contract or an insurance policy. At some point the contract expires and the question arises: will the customer (tacitly) renew, or walk to a competitor?

    You could approach every customer whose contract is expiring and ask them to please remain a customer. But history shows that only 20% of these customers actually cancel. The result is that you:

    • incur unnecessary costs: 100,000 brochures or phone calls cost considerably more than 20,000;
    • wake sleeping dogs, with the risk that customers think “Hey, I can end my contract; let me see whether there is a better offer elsewhere”;
    • pester customers with unnecessary communication and end up in their spam folder from then on.

    To prevent this, you want to know which customers have an elevated probability of cancelling their contract. In this situation it is very sensible to deploy a predictive model and use it to determine your target group.

    The model: identify your high-risk customers

    With a predictive model you estimate the probability that a customer whose contract is expiring will actually cancel. As input for the model you use as much available information as possible: (historical) customer, marketing, sales and/or web data.

    First you identify the customers whose contracts expired in the past and check whether they cancelled.
    Then the model determines which (customer) characteristics correlate with eventually cancelling or not. Concretely, these are attributes such as gender, income, the number of years someone has been a customer and whether other products are also purchased.

    The model then assigns each selected characteristic a weight. These weights are projected onto the customers whose contracts expire in the coming period. This gives you, per customer, the probability that they will leave within a given period.
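    The weighting step described above can be sketched as a minimal churn-scoring model. The characteristics and weights below are invented for illustration; in practice the weights would be estimated (for example with logistic regression) from the historical contract expirations identified earlier.

```python
import math

# Invented weights; in practice these are estimated from historical data
weights = {"intercept": -1.0, "years_customer": -0.15,
           "n_products": -0.4, "had_complaint": 2.0}

def churn_probability(customer):
    """Logistic score: weighted characteristics -> probability of cancelling."""
    z = weights["intercept"]
    z += weights["years_customer"] * customer["years_customer"]
    z += weights["n_products"] * customer["n_products"]
    z += weights["had_complaint"] * customer["had_complaint"]
    return 1 / (1 + math.exp(-z))

def risk_group(p):
    """Segment expiring contracts so each group gets a fitting channel."""
    return "high" if p > 0.5 else "medium" if p > 0.2 else "low"

loyal = {"years_customer": 10, "n_products": 3, "had_complaint": 0}
new_unhappy = {"years_customer": 1, "n_products": 1, "had_complaint": 1}
```

    With these invented weights a long-standing multi-product customer scores as low risk, while a recent customer with a complaint lands in the high-risk group that merits a personal approach.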

    Create insights for content and channel

    Now that the high-risk customers are known, you can delineate the target group for the campaign. You know Who to approach. With a more advanced duration model you can also determine When to approach someone.

    You then use the model’s output to divide the target group into risk segments, and decide per segment through Which marketing channel to approach them. Customers at the highest risk of leaving you want to approach personally with a strong offer. Lower-risk customers can be reached through a cheaper channel such as e-mail. This way you deploy your marketing budget where it counts.

    What will the message of your campaign be? You determine that by mapping the characteristics of your target group. Are they young or old? Relatively often women? Have they been customers for a long time? This information is useful for determining the content of your campaign. Because you tailor the message to your audience, your campaign results improve!

    The benefits of predictive models

    Whether a campaign targets prospects, upsell or churn, predictive models make your campaigns more efficient and more effective. They help you answer the W’s and ultimately deliver a higher return on investment. Predictive models:

    • help you determine your target group (Who)
    • offer guidance for effective campaign content (What)
    • lead to more efficient use of your marketing budget across marketing channels (Where)
    • can estimate the moment at which to approach someone (When)


    Author: Olivier Tanis

    Source: 2Organize.nl

  • Why a Business Data Scientist programme


    Every organization that changes is searching. That search often involves data: how can we apply data better? How can we find new applications for data? Do we even have the right data? What should we do with data science and big data? How can we use data to make better decisions and thus perform better?

    Organizations must find an answer to these questions, partly under pressure from an evolving market and shifting competition. Data thereby takes a central place in operations, and organizations become 'data driven'.

    Of course this requires 'data science': the environments and skills to dissect and analyse data and translate it into models, advice and decisions.

    We designed the Business Data Scientist programme because no company will succeed with tools and techniques alone. It is precisely the business data scientist who bridges data science and the change taking place in organizations.

    Too often organizations emphasize the technology (Hadoop? Spark? A data lake? Should we learn R?). To succeed with data you also need other instruments. Business models, business analysis and strategy formation help formulate the right questions and set goals. Soft skills and change-management skills make those goals visible to sponsors and stakeholders. Knowledge of data science, architecture, methods and organizational models provides the insight to fit data science into an organization. Vision and leadership are needed to make data science work in an organization. Our goal is to show participants all of this. The programme is designed to bring these aspects together and provide usable instruments.

    What do I like most about this programme? Constantly building the bridge to: what do we do now, and how do you put the theory into practice? Every piece of theory is translated into a practical application in the case study. And that is the first step toward achieving success with data science in your own work, team, department, division or organization.

    Want to know more? Interested? On 28 November there is a theme evening about the Business Data Scientist in Utrecht. You can register via the Radboud Management Academy!

    This blog appeared on www.businessdecision.nl.

    Author: Alex Aalberts


  • What Can Retailers Do To Elude Extinction?

    Here's what you didn't learn in school about the disruption affecting retail today. A recent article by consultant Chris H. Petersen, "Seven disruptive trends that will kill the 'dinosaurs of retail'", discussed the fate of "25 retail dinosaurs that vanished in the last 25 years", which was the subject of an Entrepreneur article. Those retailers included giants such as Circuit City, Comp USA, Blockbuster, Borders, and Tower Records, companies which literally dominated their category or channel. Others named in the article were retail innovators in their own right until new disruptors outgunned them. The point is that neither longevity, size, nor specialization guarantees retail survival today. So how can today's retailers avoid being extinguished by current disruptive innovations?

    Disruptive innovation refers to any enhanced or completely new technology that replaces and disrupts an existing technology, rendering it obsolete. (Picture how we went from the Model T to the KIA; from giant mainframes to personal computers; or from fixed-line telephones to cellphones/smartphones).

    Disruptive innovation is described by Harvard Business professor Clayton Christensen as a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.

    Today's major disruptive retail trends have led to the rise of the consumer, the rise of technology to help retailers best serve the consumer while wrestling with competitive forces, and the demise of "the old way of doing business."

    I. The Consumer.

    Evolving, innovative, disruptive technology has led to consumer-dominated behavior that reaches across many channels. As we know, today's consumer now shops any time and everywhere using a variety of helping tools.

    The consumer is capable of having a personal, seamless experience across their entire shopping journey to explore, evaluate and purchase, tempered by how retailers DO business, provide service, deal with their competition, etc.

    * The consumer journey starts online, although stores remain a destination for experience.

    What can retailers do? The successful retailer of the future needs to not only master online and offline, but how to connect with the consumer across many touch points, especially social media.

    * Mobile juggernaut. The latest stats show that there are now more cell phones in use than people on this planet. Smartphones now exceed 4.5 billion. Mobile is the majority and will be the preferred screen for shopping.

    What can retailers do? Retail survivors must optimize for mobile engagement, and also broadcast offers and connect with consumers wherever they are. The store of the future will not only have beacons to connect, but to track traffic via mobile as well.

    * Stock availability / Virtual aisle / Endless shelf. More than 50 percent of consumers expect to shop online and see if stock is available in store.

    Omni channel consumers now fully realize that stores can't begin to stock every model, style and color. While consumers can see hundreds if not thousands of products in store, they know that there are millions online.

    What can retailers do? The survivors are literally creating a seamless experience between online, store and mobile apps so the consumer can "have it their way" anywhere, anytime.

    * Consumer experience still rules. Consumer experience still needs to come down to senses: Tactile, visual, and psychological.

    What can retailers do? Virtual dressing rooms, better in-store experiences, and adoption of new disruptive technology to address and satisfy these issues.

    * Personalization of products and services.

    What can retailers do? New survivors are emerging with "mass personalization" opportunities to custom tailor your clothes or curate your personal wardrobe assortment and send it to you.

    * Social Connections and the influence of the opinions of others. Social has become a primary source of research and validation on what to buy. Today's consumers are 14 times more likely to believe the advice of a friend than an ad.

    What can retailers do? Today's major brands are giving much more attention to and spending more dollars on social media than traditional media.

    II. Technology

    Disruptors share the common purpose to create businesses, products and services that are better -- usually less expensive and always more creative, useful, impactful -- and scalable.

    What can retailers do? Put into use as soon as possible disruptive technology solutions such as price and assortment intelligence, behavioral economics, customer experience analytics, predictive analytics, and more to help understand, meet, and outgun the competition and service the customer.

    A Note on Predictive Analytics.

    Dr. Christensen describes predictive analytics as "the ability to look at data from the past in order to succeed in new ways in the future." Predictive analytics solutions, which give retailers the capability to forecast consumer purchase trends in order to sell the most products at the best prices at any given time, are coming on strong.

    Bottom Line For Your Bottom Line

    There's never been a time of more disruptive change in retail. Retailers who are the most adaptable to change -- and not the strongest nor the most intelligent of the species -- will be the ones to survive.

    It's a case of keeping yourself on top of the tsunami of change through the mastery of today's and tomorrow's new disruptive technologies.

    *Thanks to Chris H. Petersen, PhD, CEO of Integrated Marketing Solutions, a strategic consultant who specializes in retail, leadership, marketing, and measurement.

    Source: upstreamcommerce.com, February 8, 2015

  • What is Machine Learning? And which are the best practices?

    As machine learning continues to address common use cases it is crucial to consider what it takes to operationalize your data into a practical, maintainable solution. This is particularly important in order to predict customer behavior more accurately, make more relevant product recommendations, personalize a treatment, or improve the accuracy of research. In this blog, we will attempt to understand the meaning of Machine Learning, what it takes for it to work and which are the best machine learning practices.

    What is ML?

    Machine learning is a computer programming technique that uses statistical probabilities to give computers the ability to 'learn' without being explicitly programmed. Simply put, machine learning 'learns' from its exposure to external information. Machine learning makes decisions according to the data it interacts with and uses statistical probabilities to determine each outcome. These statistics are produced by various algorithms, some of which are modeled on the human brain. In this way, every prediction it makes is backed by solid factual, mathematical evidence derived from previous experience.

    A good example of machine learning is the sunrise example. A computer, for instance, cannot learn that the sun will rise every day if it does not already know the inner workings of the solar system and our planets, and so on. Alternatively, a computer can learn that the sun rises daily by observing and recording relevant events over a period of time.

    After the computer has witnessed the sunrise at the same time for 365 consecutive days, it will calculate, with high probability, that the sun will rise again on the three hundred and sixty-sixth day. Even then, of course, there will still be an infinitesimal chance that the sun won't rise the next day, as the statistical data collected thus far will never allow for a 100% probability.
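    The sunrise example has a classical formalisation: Laplace's rule of succession estimates the probability that the next trial succeeds, given s successes in n trials, as (s + 1) / (n + 2). A minimal sketch (the function name is ours, purely for illustration):

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: the estimated probability that
    the next trial succeeds, given `successes` out of `trials` so far."""
    return (successes + 1) / (trials + 2)

# After observing the sun rise on 365 consecutive days, the estimate
# is 366/367, i.e. roughly 0.997 -- high, but never exactly 1.0.
p = rule_of_succession(365, 365)
```

    This matches the article's point: no finite run of observations ever pushes the probability all the way to 100%.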

    There are three types of machine learning:

    1. Supervised Machine Learning

    In supervised machine learning, the computer learns the general rule that maps inputs to desired target outputs. Also known as predictive modeling, supervised machine learning can be used to make predictions about unseen or future data such as predicting the market value of a car (output) from the make (input) and other inputs (age, mileage, etc).
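    The car-value example can be sketched with ordinary least squares on a single input feature. The data below is hypothetical and the helper names are ours; a real project would typically use a library such as scikit-learn, but the idea is the same:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one input feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical training data: mileage (thousands of km) -> price (EUR).
mileage = [10, 40, 80, 120]
price = [18000, 15000, 11000, 7000]
a, b = fit_line(mileage, price)

def predict(x):
    """Predict the market value of an unseen car from its mileage."""
    return a * x + b

estimate = predict(60)  # value of a car with 60,000 km on the clock
```

    The 'supervised' part is exactly the pairing of inputs with known target outputs during fitting; prediction then applies the learned mapping to unseen inputs.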

    2. Unsupervised Machine Learning

    In unsupervised machine learning, the algorithm is left on its own to find structure in its input and discover hidden patterns in the data. This is also known as 'feature learning'.

    For example, a marketing automation program can target audiences based on their demographics and purchasing habits that it learns.
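    A minimal way to see 'finding structure on its own' is clustering. The sketch below groups hypothetical customers by proximity with a bare-bones k-means; no labels are ever supplied, and the two segments emerge from the data itself:

```python
def kmeans(points, k, iterations=10):
    """Minimal k-means: partition points into k clusters by proximity.
    Centroids start at the first k points (a simple, deterministic choice)."""
    centroids = list(points[:k])
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Hypothetical customers as (age, monthly spend) pairs; two natural groups.
customers = [(22, 40), (25, 35), (27, 45), (58, 210), (61, 190), (65, 205)]
centroids, clusters = kmeans(customers, k=2)
```

    A marketing tool built on this idea would then target each discovered segment differently.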

    3. Reinforcement Machine Learning

    In reinforcement machine learning, a computer program interacts with a dynamic environment in which it must perform a certain goal, such as driving a vehicle or playing a game against an opponent. The program is given feedback in terms of rewards and punishments as it navigates the problem space, and it learns to determine the best behavior in that context.
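    The reward-and-punishment loop can be sketched with tabular Q-learning on a toy 'corridor': the agent starts at one end, receives +1 for reaching the other end, and learns that walking right is the best behaviour. All parameter values here are illustrative:

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=1):
    """Tabular Q-learning on a corridor of n_states cells.
    Actions: 0 = left, 1 = right; reward +1 on reaching the last cell."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            reward = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the next state.
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
```

    After training, 'right' should score higher than 'left' in every non-terminal state, i.e. the learned policy walks toward the goal.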

    Making ML work with data quality

    Machine Learning depends on data. Good quality data is needed for successful ML. The more reliable, accurate, up-to-date and comprehensive that data is, the better the results will be. However, typical issues including missing data, inconsistent data values, autocorrelation and so forth will affect the statistical properties of the datasets and interfere with the assumptions made by algorithms. It is vital to implement data quality standards with your team throughout the beginning stages of the machine learning initiative.
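    One of those typical issues, missing data, has simple baseline remedies. The sketch below (hypothetical records, our own helper name) fills gaps with the column mean; a real project would weigh this against dropping rows or using model-based imputation:

```python
def impute_missing(rows, column):
    """Replace missing (None) values in `column` with the mean of the
    values that are present -- a simple baseline imputation strategy."""
    present = [r[column] for r in rows if r[column] is not None]
    mean = sum(present) / len(present)
    for r in rows:
        if r[column] is None:
            r[column] = mean
    return rows

# Hypothetical customer records with one missing age.
records = [{"age": 34}, {"age": None}, {"age": 28}, {"age": 40}]
impute_missing(records, "age")  # the gap is filled with 34.0, the mean
```

    Whichever remedy is chosen, the article's point stands: agree on data quality rules with the team before modelling starts, not after.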

    Democratizing and operationalizing

    Machine Learning can appear complex and hard to deliver. But if you have the right people involved from the beginning, with the right skills and knowledge, there will be less to worry about.

    Get the right people involved on your team, people who:

    • can identify the data task, choose the right model and apply the appropriate algorithms to address the specific business case
    • have skills in data engineering, as machine learning is all about data
    • will choose the right programming language or framework for your needs
    • have a background in general logic and basic programming
    • have a good understanding of core mathematics, especially linear algebra, calculus, probability and statistics, to manage most standard machine learning algorithms effectively

    Most importantly, share the wealth. What good is a well-designed machine learning strategy if the rest of your organization cannot join in on the fun. Provide a comprehensive ecosystem of user-friendly, self-service tools that incorporates machine learning into your data transformation for equal access and quicker insights. A single platform that brings all your data together from public and private cloud as well as on-premise environments will enable your IT and business teams to work more closely and constructively while remaining at the forefront of innovation.

    Machine Learning best practices

    Now that you are prepared to take a data integration project that involves machine learning head-on, it is worth following these best practices below to ensure the best outcome:

    1. Understand the use case – Assessing the problem you are trying to solve will help determine whether machine learning is necessary or not.
    2. Explore data and scope – It is essential to assess the scope, type, variety and velocity of data required to solve the problem.
    3. Research model or algorithm – Finding the best-fit model or algorithm is about balancing speed, accuracy and complexity.
    4. Pre-process – Data must be collated into a format or shape which is suitable for the chosen algorithm.
    5. Train – Teach your model with existing data and known outcomes.
    6. Test – Test the model against separate, unseen data to validate its accuracy.
    7. Operationalize – After training and validating, start calculating and predicting outcomes with new data.
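    Steps 4-7 can be condensed into a few lines. The sketch below uses a deliberately trivial model (hypothetical numbers, our own helper names) just to show the shape of the flow: split the data, train on one part, test on the held-out part, then operationalize the resulting predictor:

```python
def train_test_split(data, test_ratio=0.25):
    """Hold out the last portion of the data for testing (step 6)."""
    cut = int(len(data) * (1 - test_ratio))
    return data[:cut], data[cut:]

def train(pairs):
    """Step 5: 'teach' a trivial model -- here, the average ratio y/x."""
    k = sum(y / x for x, y in pairs) / len(pairs)
    return lambda x: k * x  # step 7: the operational predictor

data = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]  # hypothetical (x, y) pairs
train_set, test_set = train_test_split(data)
model = train(train_set)

# Step 6: measure the error on data the model has never seen.
test_error = sum(abs(model(x) - y) for x, y in test_set) / len(test_set)
```

    A low error on the held-out set is the signal that the model is ready to be operationalized on genuinely new data.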

    As data increases, more observations are made. This results in more accurate predictions. Thus, a key part of a successful data integration project is creating a scalable machine learning strategy that starts with good quality data preparation and ends with valuable and intelligible data. 

    Author: Javier Hernandez

    Source: Talend

  • What is Predictive Intelligence and how it’s set to change marketing in 2016

    Explaining how modelling of marketing outcomes can let you make smarter marketing decisions

    As 2016 gets under way we're seeing more discussion of the applications of Predictive Intelligence. It's a nascent field, but one that is gaining popularity fast, and for some very good reasons, which we will discuss in more detail in this article. We'll start by explaining what precisely Predictive Intelligence is, then provide some hard stats on its impact in the marketing world so far, and finish by explaining how we feel it's set to shape marketing in 2016 and beyond.

    What Is Predictive Intelligence?

    Despite the buzz surrounding Predictive Intelligence, many still don’t know what it actually is, so here is our definition. Predictive Intelligence is often used interchangeably with terms like Predictive Recommendation, Predictive Marketing and Predictive Analytics. Although there are some minor differences between the terms, broadly speaking they all essentially mean the same thing.

    Our definition of predictive intelligence for marketing is:

    "Predictive Intelligence is the process of first collecting data on consumers and potential consumers’ behaviours/actions from a variety of sources and potentially combining with profile data about their characteristics.

    This data is then distilled and interpreted, often automatically, by sophisticated algorithms, from which a set of predictions are made, and based on these, rules are developed to deliver relevant communications and offers to consumers to persuade them to engage with a business to meet its goals".
    You can see that because of the three-step process of analysis, interpretation and implementing rules for automated communications, a single sentence definition is difficult to devise! But, we hope this shows the essence of Predictive Marketing.

    McKinsey views it as applying mathematical models to best predict the probability of an outcome. They cite the example of customer relationship managers using models to estimate the likelihood of a customer changing providers, known as 'churn'. Other examples use sources including everything from CRM data and marketing data to structured data such as click-through rates or engagement levels.

    The relevant actions that are carried out based on this distilled and interpreted data are predicting and then executing the optimum marketing message (e.g. image-based vs text-heavy / formal vs informal) to specific customers/potential customers, across the optimum marketing channel(s) (e.g. social media vs email), at the optimum time(s) (e.g. morning vs afternoon), in order to achieve your company's marketing goals. These goals are usually higher engagement and/or sales. In summary, you are communicating in a way that is simultaneously most relevant to and preferred by the customers/potential customers and most likely to result in you achieving your marketing goal(s).

    Essentially, you set what the marketing goal is and the Predictive Intelligence algorithms will then make good use of the collected data to find the optimum way of achieving it. Predictive Intelligence aims to deliver content based on customer needs, essentially tailoring the experience for the person receiving the information. Predictive Intelligence, empowered by data, thus begins to usher in truly personalised one-to-one marketing communication that is aligned with a company's marketing goals.
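    Stripped to its essence, that loop is: record what each segment responded to, then pick the message, channel and time with the best predicted outcome. A deliberately simple sketch (the data and names are invented for illustration; real systems model this with far richer algorithms):

```python
# Hypothetical engagement history: (segment, channel, time) -> click-through rate.
history = {
    ("young", "social", "morning"): 0.08,
    ("young", "email", "morning"): 0.02,
    ("young", "social", "evening"): 0.11,
    ("older", "email", "morning"): 0.06,
    ("older", "social", "evening"): 0.03,
}

def best_action(segment):
    """Pick the (channel, time) with the highest observed click-through
    rate for a segment -- a stand-in for the sophisticated algorithms
    described above."""
    options = {key[1:]: rate for key, rate in history.items()
               if key[0] == segment}
    return max(options, key=options.get)

best_action("young")  # -> ("social", "evening")
```

    Real Predictive Intelligence platforms replace the lookup table with learned models, but the goal-directed selection step is the same.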

    Some stats and examples showing the value of predictive intelligence

    While we’re sure all the above sounds great to you, understandably, there is nothing more convincing than some cold hard stats on how Predictive Intelligence is actually performing. So without further ado, check out the below.

    As mentioned on IdioPlatform.com, research firm Aberdeen Group conducted an in-depth survey for their paper titled "Predictive Analytics In Financial Services", for which they interviewed 123 financial services companies. They found that the companies utilising Predictive Analytics typically achieved an 11 per cent increase in the number of clients secured over the previous 12 months. Further, they saw a 10 per cent increase in the number of new client opportunities identified, compared with those that had not utilised Predictive Analytics. Pretty impressive.

    Additionally, a Forbes Insights survey of 306 execs from companies with $20 million or more in annual revenue found that of the companies that have been carrying out predictive marketing initiatives for at least 2 years, 86% of them have “increased return on investment as a result of their predictive marketing”.

    Finally, in a study by toneapi.com, a model was built of the correlation between the emotions expressed in an airline's communications and the subsequent click-through rate. Based on this insight, toneapi.com was able to use the model to predict how the airline could increase its click-through rates by appealing to certain emotions that would generate more interest from its customers.

    In summary, Predictive Intelligence drives marked improvements across marketing channels.

    The Emotional Connection

    Initially one of the big advantages of algorithmic Predictive Intelligence was the removal of emotion from the equation; human feelings and mood played little part in the decision as the computers choose the best course of action based on hard data. Now, as processing speeds and storage increase and the analysis of unstructured data improves we are seeing companies move into a more fluid form of Predictive Intelligence based around sentiment and emotion. The driver for this is that emotional analysis of text can help drive understanding of the dynamics that are causing key effects. These insights can be then used to optimise the content to match these emotions creating a more iterative and action orientated approach to marketing campaigns.

    These companies look at the emotions which motivate behaviour and utilise technology to predict and improve results. Toneapi analyses freeform text content (such as emails, press releases and brand copy) for emotional impact and then offers up suggestions for improvements. Likewise, Motista's studies have shown that "emotion is the most predictive driver of customer behavior"; they bring together big data and social science to increase profitability.

    Looking To 2016 And Beyond

    Up until now Predictive Intelligence has seen most action in the B2B world. B2B companies have been using it to crunch colossal amounts of data on customer/potential customer behaviours from a variety of sources. They have then been using it to automatically draw insights from this data based on a set of signals in order to score their leads, identify the most attractive leads earlier on and uncover new high value leads previously unseen. Crucially, Predictive Intelligence has then allowed the B2B companies to tailor the marketing approach and messaging depending on the customer/potential customer’s actions/behaviours (signals) across the different areas where the data has been collected.

    We believe that in 2016 we're going to see more of the above, and the process is going to become even more refined and sophisticated within the B2B sector. We also feel 2016 is the year we see Predictive Intelligence move more and more into the B2C world, especially now that its frequent use across industry sectors in B2B has proven its effectiveness and given the approach some real credibility. And we see more interest in Predictive Intelligence around emotion analytics, free-form text, unstructured data and behavioural social sciences.

    Additionally, unlike even a couple of years ago, there are now quite a few smaller Predictive Intelligence companies on the market in addition to big names like IBM and Salesforce. Many of these companies offer intuitive, easy-to-understand, well-designed and well-priced cloud-based Predictive Intelligence software packages. This greatly lowers the barrier to entry for small-to-medium businesses (SMBs). It allows them to dip their toes into the world of Predictive Intelligence and test the waters with little risk or friction, or, if they wish, jump straight into the deep end of Predictive Intelligence and reap the rewards.

    Thus a whole new world has opened up to the SMB. A world that not too long ago was reserved mostly for the large corporations that could afford the expensive Predictive Analytics software (which was the only real choice) or that had the budgets big enough to hire data scientists to crunch data and draw insights from it from which to base their predictions.


    We hope this article has gone some way towards demystifying the phrase "Predictive Intelligence". Further, we hope we have communicated the immense benefits to be reaped if Predictive Intelligence is executed properly: higher engagement, higher click-through rates, higher conversion rates and greater emotional impact. Predictive Intelligence has already seen some real traction in the B2B world, and we believe 2016 will mark the year that B2C companies and SMBs in general adopt Predictive Intelligence in a meaningful way, some dipping their toes in and giving it a try and others jumping straight into the deep end and really embracing it.

    Source: Smart Insights

  • Why predictive analytics will be big in 2015

    As the big data analytics sector rapidly matures, companies will be increasingly asking what they can get out of the technology this year. And one of the big trends for 2015 will be companies aiming to move beyond a reactive approach to data to a more proactive strategy.

    At the heart of this will be predictive analytics, which has been tipped by several commentators as one of the key developments in the industry for this year. This will be in response to increased demands to tackle the issue of problems and opportunities being spotted too late.

    For instance, it was noted by Tech Vibes writer Christopher Surdak that often, by the time businesses have extracted, transformed and loaded the relevant data using traditional technologies, the chance has passed - and this can defeat the purposes of many investments. He said: "There's no value in identifying what customers are doing every minute of the day if you can't respond predictively and proactively."

    He therefore stated that a top priority for organisations in 2015 will be to re-engineer their big data environments to enable information streams to be accessed, analysed and shared in real time. Benefits of this will include increased revenue, better productivity for knowledge workers, and lower costs.

    A report by TDWI Research, cited by Forbes, identified five key reasons why companies are looking to invest in predictive analytics. These are to predict trends, understand customers, improve business performance, drive strategic decision-making, and predict behaviour.

    Forbes highlights the telco sector as one that particularly stands to benefit from what predictive and real-time analytics can provide. Firms in this sector are looking for new ways to stand out from the crowd in order to retain customers, and these tools can help them better understand customers, and therefore serve them more effectively.

    One example the publication cited was Cox Communications, which turned to predictive analytics to identify business drivers for growth and then pinpoint existing and prospective customers to cultivate new offerings. The firm also wanted to answer tough questions about why customers would choose them over a competitor - or vice versa - as well as what type of customer is likely to buy a specific product.

    With predictive analytics, the company was able to put more campaigns into the field, as well as measure the effectiveness of different offers and marketing techniques to different customer segments, Forbes said. As a result, recent campaigns have generated an 18 per cent increase in customer responses.

    The fact predictive analytics can be engaged to tackle a wide range of issues and challenges will be a key factor in the growth of the sector in 2015 - with every department from sales and marketing to human resources being able to derive benefits from the technology.

    For instance, predictive tools can be very useful to HR professionals by enabling them to spot employees who are at risk of leaving a company, as well as better identifying prospective hires who will be a good fit for an organisation.

    Speaking to Tech Target, vice-president and principal analyst at Constellation Research Holger Mueller said: "One of the first areas that vendors have tackled [with predictive analytics] was around 'flight risk', or determining if a valued employee could leave a company. These days we see them used more for recruiting and selecting highly-skilled candidates for the right position."

    Kognitio, 16 January 2015
