25 items tagged "artificial intelligence"

  • 'We must prepare for mass unemployment caused by robots'

    The rise of robots and artificial intelligence means that more and more jobs are disappearing. We must start preparing for mass unemployment now, professors warn.

    We are entering an era in which machines can take over almost every human task, said Moshe Vardi, professor of computer science at the University of Texas, this weekend at a conference in the United States, the FT reports. No sector will remain untouched by robots, so the big question, according to the scientists, is: if robots take over our work, what will we do?

    Vardi adds that we could all spend our time on enjoyable things, but that a life revolving solely around leisure is not everything either. "I believe that work is essential to human well-being."

    'Governments not yet prepared'
    Companies such as Google, Facebook, IBM, and Microsoft are scaling up their investments in artificial intelligence to billions this year, but governments do not yet appear prepared for this, experts at the conference argued.

    At the initiative of Bart Selman, professor of computer science at Cornell University, an open letter to policymakers was drafted last year urging them to map out the risks of ever-smarter machines. The letter has been signed by 10,000 entrepreneurs, professors, and engineers, including Tesla founder Elon Musk.

    Through the non-profit organization OpenAI, Musk funds research into artificial intelligence and into how people can benefit from it most. He regards artificial intelligence as one of the greatest threats to humanity.

    Source: RTL Z

  • 2016 will be the year of artificial intelligence

    December is traditionally the time of year for looking back, and New Year's Eve is of course the best day for it. At Numrush, however, we prefer to look ahead. We already did so in early December with our RUSH Magazine: in that Gift Guide we offered gift tips based on a number of themes we will be hearing a lot about in the coming year.

    One topic deliberately remained somewhat underexposed in our Gift Guide, partly because it is not something you give as a present, but also because it really transcends the various themes. I am talking about artificial intelligence. That is nothing new, of course; a great deal has already happened in this field, but in the coming year its application will accelerate even further.

  • 2017 Investment Management Outlook


    Several major trends will likely impact the investment management industry in the coming year. These include shifts in buyer behavior as the Millennial generation becomes a greater force in the investing marketplace; increased regulation from the Securities and Exchange Commission (SEC); and the transformative effect that blockchain, robotic process automation, and other
    emerging technologies will have on the industry.

    Economic outlook: Is a major stimulus package in the offing?

    President-elect Donald Trump may have to depend heavily on private-sector funding to proceed with his $1 trillion infrastructure spending program, considering Congress's ongoing reluctance to increase spending. The US economy may be nearing full employment, with younger cohorts entering the labor market as more Baby Boomers retire. In addition, the prospects for a fiscal stimulus seem greater now than they were before the 2016 presidential election.

    The most likely scenario for 2017 is steady improvement and stability. Although weak foreign demand may continue to weigh on growth, domestic demand should be strong enough to provide employment for workers returning to the labor force, as the unemployment rate is expected to remain at approximately 5 percent. Annual GDP growth is likely to peak at 2.5 percent. In the medium term, low productivity growth will likely put a ceiling on the economy, and by 2019, US GDP growth may fall below 2 percent even though the labor market might be at full employment. Inflation is expected to remain subdued. Interest rates are likely to rise in 2017 but should remain at historically low levels throughout the year. If the forecast holds, asset allocation shifts among cash, commodities, and fixed income may begin by the end of 2017.

    Investment industry outlook: Building upon last year’s performance
    Mutual funds and exchange-traded funds (ETFs) have experienced positive growth. Worldwide regulated funds grew at a 9.1 percent CAGR, versus 8.6 percent for US mutual funds and ETFs. Non-US investments grew at a slightly faster pace due to global demand. Both worldwide and US investments showed signs of softening demand in 2016 as returns remained low.

    Hedge fund assets have experienced steady growth over the past five years, even through performance swings.

    Private equity investments continued a track record of strong asset appreciation. Private equity has continued to attract investment even with current high valuations. Fundraising increased incrementally over the past five years as investors increased allocations in the sector.

    Shifts in investor buying behavior: Here come the Millennials
    Both institutional and retail customers are expected to continue to drive change in the investment management industry. The two customer segments are voicing concerns about fee sensitivity and transparency. Firms that enhance the customer experience and position advice, insight, and expertise as components of value should have a strong chance to set themselves apart from their competitors.

    Leading firms may get out in front of these issues in 2017 by developing efficient data structures to facilitate accounting and reporting and by making client engagement a key priority. On the retail front, the SEC is acting on retail investors’ behalf with reporting modernization rule changes for mutual funds. This focus on engagement, transparency, and relationships over product sales is integral to creating a strong brand as a fiduciary, and it may prove to differentiate some firms in 2017.

    Growth in index funds and other passive investments should continue as customers react to market volatility. Investors favor the passive approach in all environments, as shown by net flows. They are using passive investments alongside active investments, rather than replacing the latter with the former. Managers will likely continue to add index share classes and index-tracking ETFs in 2017, even if profitability is challenged. In addition, the Department of Labor’s new fiduciary rule is expected to promote passive investments as firms alter their product offerings for retirement accounts.

    Members of the Millennial generation—which comprises individuals born between 1980 and 2000—often approach investing differently due to their open use of social media and interactions with people and institutions. This market segment faces different challenges than earlier generations, which influences their use of financial services.

    Millennials may be less prosperous than their parents and may need to own less in order to fully fund retirement. Many start their careers burdened by student debt. They may have a negative memory of recent stock market volatility, distrust financial institutions, favor socially conscious investments, and rely on recommendations from their friends when seeking financial advice.

    Investment managers likely need to consider several steps when targeting Millennials. These include revisiting product lines, offering socially conscious “impact investments,” assigning Millennial advisers to client service teams, and employing digital and mobile channels to reach and serve this market segment.

    Regulatory developments: Seeking greater transparency, incentive alignment, and risk control
    Even with a change in leadership in the White House and at the SEC, outgoing Chair Mary Jo White’s major initiatives are expected to endure in 2017 as they seek to enhance transparency, incentive alignment, and risk control, all of which build confidence in the markets. These changes include the following:

    Reporting modernization. Passed in October 2016, this new requirement of forms, rules, and amendments for information disclosure and standardization will require development by registered investment companies (RICs). Advisers will need technology solutions that can capture data that may not currently exist from multiple sources; perform high-frequency calculations; and file requisite forms with the SEC.

    Liquidity risk management (LRM). Passed in October 2016, this rule requires open-end funds (except money market funds) and ETFs to establish LRM programs, reducing the risk that a fund cannot meet redemption requests without diluting the interests of remaining shareholders.

    Swing pricing. Also passed in October 2016, this regulation provides an option for open-end funds (except money market and ETFs) to adjust net asset values to pass the costs stemming from purchase and redemption activity to shareholders.
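    The mechanics can be illustrated with a small worked example. This is a hedged sketch, not the SEC's prescribed methodology: the threshold and swing factor below are invented, and in practice each fund sets its own parameters under its policies.

```python
# Hypothetical illustration of swing pricing: when net flows exceed a
# threshold, the fund's NAV is adjusted ("swung") so that transacting
# shareholders, not remaining ones, bear the trading costs.
def swung_nav(nav, net_flow, total_assets, threshold=0.01, swing_factor=0.002):
    """Return the dealing NAV after applying a swing adjustment.

    nav          -- unadjusted net asset value per share
    net_flow     -- net subscriptions (+) or redemptions (-) for the day
    total_assets -- total fund assets
    threshold    -- net flow (fraction of assets) that triggers a swing
    swing_factor -- fraction by which the NAV moves when triggered
    """
    if abs(net_flow) / total_assets < threshold:
        return nav  # flows too small to trigger an adjustment
    # Net inflows swing the NAV up (buyers pay the costs);
    # net outflows swing it down (sellers bear the costs).
    direction = 1 if net_flow > 0 else -1
    return nav * (1 + direction * swing_factor)

# Heavy redemptions: NAV swings down, so redeeming investors absorb costs.
print(swung_nav(100.0, net_flow=-5_000_000, total_assets=100_000_000))  # ~99.8
```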

    Use of derivatives. Proposed in December 2015, this requires RICs and business development companies to limit the use of derivatives and put risk management measures in place.

    Business continuity and transition plans. Proposed in June 2016, this measure requires registered investment advisers to implement written business continuity and transition plans to address operational risk arising from disruptions.

    The Dodd-Frank Act, Section 956. Reproposed in May 2016, this rule prohibits compensation structures that encourage individuals to take inappropriate risks that may result in either excessive compensation or material loss.

    The DOL’s Conflict-of-Interest Rule. In 2017, firms must comply with this major expansion of the “investment advice fiduciary” definition under the Employee Retirement Income Security Act of 1974. There are two phases to compliance:

    Phase one requires compliance with investment advice standards by April 10, 2017. Distribution firms and advisers must adhere to the impartial conduct standards and provide a notice to retirement investors that acknowledges their fiduciary status and describes their material conflicts of interest. Firms must also designate a person responsible for addressing material conflicts of interest and for monitoring advisers' adherence to the impartial conduct standards.

    Phase two requires compliance with exemption requirements by January 1, 2018. Distribution firms must be in full compliance with exemptions, including contracts, disclosures, policies and procedures, and documentation showing compliance.

    Investment managers may need to create new, customized share classes driven by distributor requirements; drop distribution of certain share classes after the rule takes effect; and offer more fee reductions for mutual funds.

    Financial advisers may need to take another look at fee-based models, if they are not already using them; evolve their viewpoint on share classes; consider moving to zero-revenue share lineups; and contemplate greater use of ETFs, including active ETFs with a low-cost structure and the 22(b) exemption (which enables broker-dealers to set commission levels on their own).

    Retirement plan advisers may need to look for low-cost share classes (R1-R6) to be included in plan options and potentially new low-cost structures.

    Key technologies: Transforming the enterprise

    Investment management is poised to become even more driven by advances in technology in 2017, as digital innovations play a greater role than ever before.

    Blockchain. A secure and effective technology for tracking transactions, blockchain should move closer to commercial implementation in 2017. Already, many blockchain-based use cases and prototypes can be found across the investment management landscape. With testing and regulatory approvals, it might take one to two years before commercial rollout becomes more widespread.

    Big data, artificial intelligence, and machine learning. Leading asset management firms are combining big data analytics along with artificial intelligence (AI) and machine learning to achieve two objectives: (1) provide insights and analysis for investment selection to generate alpha, and (2) improve cost effectiveness by leveraging expensive human analyst resources with scalable technology. Expect this trend to gain momentum in 2017.

    Robo-advisers. Fiduciary standards and regulations should drive the adoption of robo-advisers, online investment management services that provide automated portfolio management advice. Improvements in computing power are making robo-advisers more viable for both retail and institutional investors. In addition, some cutting-edge robo-adviser firms could emerge with AI-supported investment decision and asset allocation algorithms in 2017.

    Robotic process automation. Look for more investment management firms to employ sophisticated robotic process automation (RPA) tools to streamline both front- and back-office functions in 2017. RPA can automate critical tasks that require manual intervention, are performed frequently, and consume a significant amount of time, such as client onboarding and regulatory compliance.

    Change, development, and opportunity
    The outlook for the investment management industry in 2017 is one of change, development, and opportunity. Investment management firms that execute plans that help them anticipate demographic shifts, improve efficiency and decision making with technology, and keep pace with regulatory changes will likely find themselves ahead of the competition.

    Download 2017 Investment management industry outlook

    Source: Deloitte.com


  • 5 Predictions for Artificial Intelligence in 2016

    Get ready to work alongside smart machines

     At Narrative Science, we love making predictions about innovation, technology and, in particular, the rise of artificial intelligence. We may be a bit too optimistic about the timing of certain technologies going mainstream, but we can’t help it. We are wildly optimistic about the future and genuinely believe that we have entered a dramatically new era of artificial intelligence innovation. That said, this year, we tried to focus our predictions on the near-term. Here’s our best guess as to what will happen in 2016.

    1. New inventions using AI will explode.

    In 2015, artificial intelligence went mainstream. Major tech companies including Google, Facebook, Amazon and Twitter made huge investments in AI, almost all of technology research company Gartner’s strategic predictions included AI, and headlines declared that AI-driven technologies were the next big disruptor to enterprise software. In addition, companies that made huge strides in AI, including Facebook, Microsoft and Google, open-sourced their tools. This makes it likely that in 2016, new inventions will increasingly come to market from companies discovering new ways to apply AI versus building it. With entrepreneurs now having access to low-cost quality AI technologies to create new products, we’ll also likely see an explosion in new startups using AI.

    2. Employees will work alongside smart machines.

    Smart machines will augment work and help employees be more productive, not replace them. Analytics industry leader Tom Davenport stated it well when he predicted that “smart leaders will realize that augmentation—combining smart humans with smart machines—is a better strategy than automation.”

    3. Executives will demand transparency.

    Business leaders will realize that smart machines throwing out answers without explanation are of little use. If you walked into a CEO’s office and said, “We need to shut down three factories,” the first question from the CEO would be: “Why?” Just producing a result isn’t enough, and communication capabilities will increasingly be built into advanced analytics and intelligent systems so that these systems can explain how they arrive at their answers.

    4. Artificial Intelligence will reshape companies outside of IT.

    AI-powered business applications will start to infiltrate companies other than technology firms. Employees, teams and entire departments will champion process re-engineering efforts with these intelligent systems whether they realize it or not. As each individual app eliminates a task, employees will automate many of the mundane parts of their jobs and assemble their own stack of AI-powered apps. Teammates eager to be productive and stay competitive will follow, along with team managers who are looking to execute on cost-cutting efforts.

    5. Innovation labs will become a competitive asset.

    With the pace of innovation accelerating, large organizations in industries such as retail, insurance and government will focus even more energies on remaining competitive and discovering the next big thing by forming innovation labs. Innovation labs have existed for some time, but in 2016, we’ll begin to see more resources devoted to innovation labs and more technologies discovered in the labs actually implemented across different company functions and business lines.

    2016 will be a big year for AI. Much of the work in AI in 2016 will be the catalyst for rapid acceleration of the development and adoption of AI-powered applications. In addition and perhaps even more significant, 2016 will bring about a major shift in the perception of AI. It will cease to be a scary, abstract set of ideas and concepts and will be better understood and accepted as more people realize the potential of AI to augment what we do and make our lives more productive.

    Source: Time

  • A Shortcut Guide to Machine Learning and AI in The Enterprise


    Predictive analytics / machine learning / artificial intelligence is a hot topic – what’s it about?

    Using algorithms to help make better decisions has been the “next big thing in analytics” for over 25 years, and it has been used the entire time in key areas such as fraud detection. But it has now become a full-throated mainstream business meme that features in every enterprise software keynote — although the industry is still battling over what to call it.

    It appears that terms like Data Mining, Predictive Analytics, and Advanced Analytics are considered too geeky or old for industry marketers and headline writers. The term Cognitive Computing seemed to be poised to win, but IBM’s strong association with the term may have backfired — journalists and analysts want to use language that is independent of any particular company. Currently, the growing consensus seems to be to use Machine Learning when talking about the technology and Artificial Intelligence when talking about the business uses.

    Whatever we call it, it’s generally proposed in two different forms: either as an extension to existing platforms for data analysts; or as new embedded functionality in diverse business applications such as sales lead scoring, marketing optimization, sorting HR resumes, or financial invoice matching.

    Why is it taking off now, and what’s changing?

    Artificial intelligence is now taking off because there is much more data available, along with affordable, powerful systems to crunch through it all. It is also much easier to get access to powerful algorithm-based software, whether as open-source products or embedded as a service in enterprise platforms.

    Organizations today have also become more comfortable with manipulating business data, with a new generation of business analysts aspiring to become “citizen data scientists.” Enterprises can take their traditional analytics to the next level using these new tools.

    However, we’re now at the “Peak of Inflated Expectations” for these technologies according to Gartner’s Hype Cycle — we will soon see articles pushing back on the more exaggerated claims. Over the next few years, we will find out the limitations of these technologies even as they start bringing real-world benefits.

    What are the longer-term implications?

    First, easier-to-use predictive analytics engines are blurring the gap between “everyday analytics” and the data science team. A “factory” approach to creating, deploying, and maintaining predictive models means data scientists can have greater impact. And sophisticated business users can now access some of the power of these algorithms without having to become data scientists themselves.

    Second, every business application will include some predictive functionality, automating any areas where there are “repeatable decisions.” It is hard to think of a business process that could not be improved in this way, with big implications in terms of both efficiency and white-collar employment.

    Third, applications will use these algorithms on themselves to create “self-improving” platforms that get easier to use and more powerful over time (akin to how each new semi-autonomous-driving Tesla car can learn something new and pass it on to the rest of the fleet).

    Fourth, over time, business processes, applications, and workflows may have to be rethought. If algorithms are available as a core part of business platforms, we can provide people with new paths through typical business questions such as “What’s happening now? What do I need to know? What do you recommend? What should I always do? What can I expect to happen? What can I avoid? What do I need to do right now?”

    Fifth, implementing all the above will involve deep and worrying moral questions in terms of data privacy and allowing algorithms to make decisions that affect people and society. There will undoubtedly be many scandals and missteps before the right rules and practices are in place.

    What first steps should companies be taking in this area?
    As usual, the barriers to business benefit are more likely to be cultural than technical.

    Above all, organizations need to make sure they have the right technical expertise to navigate the confusion of new vendor offerings, the right business knowledge to know where best to apply them, and the awareness that their technology choices may have unforeseen moral implications.

    Source: timoelliot.com, October 24, 2016


  • AI and the risks of Bias


    From facial recognition for unlocking our smartphones to speech recognition and intent analysis for voice assistance, artificial intelligence is all around us today. In the business world, AI is helping us uncover new insight from data and enhance decision-making.

    For example, online retailers use AI to recommend new products to consumers based on past purchases. And, banks use conversational AI to interact with clients and enhance their customer experiences.

    However, most of the AI in use now is “narrow AI,” meaning it is only capable of performing individual tasks. In contrast, general AI – which is not available yet – can replicate human thought and function, taking emotions and judgment into account. 

    General AI is still a way off so only time will tell how it will perform. In the meantime, narrow AI does a good job at executing tasks, but it comes with limitations, including the possibility of introducing biases.  

    AI bias may come from incomplete datasets or incorrect values. Bias may also emerge through interactions over time, skewing the machine’s learning. Moreover, a sudden business change, such as a new law or business rule, or ineffective training algorithms can also introduce bias. We need to understand how to recognize these biases, and to design, implement, and govern our AI applications so that the technology generates its desired business outcomes.

    Recognize and evaluate bias – in data samples and training

    One of the main drivers of bias is the lack of diversity in the data samples used to train an AI system. Sometimes the data is not readily available or it may not even exist, making it hard to address all potential use cases.

    For instance, airlines routinely run sensor data from in-flight aircraft engines through AI algorithms to predict needed maintenance and improve overall performance. But if the machine is trained only on data from flights over the Northern Hemisphere and is then applied to a flight across sub-Saharan Africa, it will produce inaccurate results under those conditions. We need to evaluate the data used to train these systems and strive for well-rounded data samples.
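    The airline example can be made concrete with a toy numeric sketch. All figures here are invented for illustration, not real flight data; the point is only that a rule "learned" from one operating regime misfires when applied to another.

```python
# Toy illustration (invented numbers): a maintenance-alert threshold
# derived from one operating regime misfires in another regime.
def learn_alert_threshold(temps, margin=10.0):
    """Derive an engine-temperature alert threshold from training data:
    the mean of the observed temperatures plus a fixed margin."""
    return sum(temps) / len(temps) + margin

# Training data drawn only from temperate Northern-Hemisphere flights.
northern_temps = [55.0, 60.0, 58.0, 62.0, 57.0]
threshold = learn_alert_threshold(northern_temps)

# Normal readings on hot-climate routes sit above the learned threshold,
# so every flight gets flagged: the model is biased, not the engines broken.
saharan_temps = [75.0, 78.0, 80.0]
false_alarms = [t for t in saharan_temps if t > threshold]
print(threshold)          # 68.4
print(len(false_alarms))  # 3 -- all three flights wrongly flagged
```

    A broader training sample covering both regimes would raise the learned threshold and remove the spurious alerts.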

    Another driver of bias is incomplete training algorithms. For example, a chatbot designed to learn from conversations may be exposed to politically incorrect language. Unless trained not to, the chatbot may start using the same language with consumers, which Microsoft unfortunately learned in 2016 with its now-defunct Twitter bot, “Tay.” If a system is incomplete or skewed through learning like Tay, then teams have to adjust the use case and pivot as needed.

    Rushed training can also lead to bias. We often get excited about introducing AI into our businesses so naturally want to start developing projects and see some quick wins. 

    However, early applications can quickly expand beyond their intended purpose. Given that current AI cannot cover the gamut of human thought and judgement, eliminating emerging biases becomes a necessary task. Therefore, people will continue to be important in AI applications. Only people have the domain knowledge – acquired industry, business, and customer knowledge – needed to evaluate the data for biases and train the models accordingly.

    Diversify datasets and the teams working with AI

    Diversity is the key to mitigating AI biases – diversity in the datasets and the workforce working day to day with the models. As stated above, we need to have comprehensive, well-rounded datasets that can broadly cover all possible use cases. If there is underrepresented or disproportionate internal data, such as if the AI only has homogenous datasets, then external sources may fill in the gaps in information. This gives the machine a richer pool of data to learn and work with – and leads to predictions that are far more accurate. 

    Likewise, diversity in the teams working with AI can help mitigate bias. When there is only a small group within one department working on an application, it is easy for the thinking of these individuals to influence the system’s design and algorithms. Starting with a diverse team or introducing others into an existing group can make for a much more holistic solution. A team with varying skills, thinking, approaches and backgrounds is better equipped to recognize existing AI bias and anticipate potential bias. 

    For example, one bank used AI to automate 80 percent of its financial spreading process for public and private companies. It involved extracting numbers out of documents and formatting them into templates, while logging each step along the way. To train the AI and make sure the system pulled the right data while avoiding bias, the bank relied on a diverse team of experts with data science, customer experience, and credit decisioning expertise. Today, it applies AI to spreading on 45,000 customer accounts across 35 countries.

    Consider emerging biases and preemptively train the machine

    While AI can introduce biases, proper design (including the data samples and models) and thoughtful usage (such as governance over the AI’s learning) can help reduce and prevent them. And, in many situations, AI can actually minimize bias that would otherwise be present in human decision-making. An objective algorithm can compensate for the natural bias that a human might introduce, such as in approving a customer for a loan based on their appearance.

    In recruiting, an AI program can review job descriptions to eliminate unconscious gender biases by flagging and removing words that may be construed as more masculine or feminine, and replacing them with more neutral terms. It is important to note that a domain expert needs to go in and make sure the changes are still accurate, but the system can recognize things that people could miss. 
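    The flag-and-replace step described above can be sketched in a few lines. The word list and substitutes below are invented for illustration; a real tool would use a vetted gender-coded-language lexicon and, as the text notes, a domain expert's review of each change.

```python
# Minimal sketch (illustrative word list, not a vetted lexicon): flag
# gender-coded words in a job description and suggest neutral substitutes.
NEUTRAL_SUBSTITUTES = {
    "ninja": "expert",
    "rockstar": "high performer",
    "dominant": "leading",
    "aggressive": "proactive",
    "nurturing": "supportive",
}

def neutralize(job_description):
    """Flag coded words; return (rewritten text, list of flagged words)."""
    flagged, out = [], []
    for word in job_description.split():
        key = word.lower().strip(".,!?")
        if key in NEUTRAL_SUBSTITUTES:
            flagged.append(key)
            out.append(NEUTRAL_SUBSTITUTES[key])
        else:
            out.append(word)
    return " ".join(out), flagged

text, flagged = neutralize("We seek a dominant sales ninja")
print(text)     # We seek a leading sales expert
print(flagged)  # ['dominant', 'ninja']
```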

    Bias is an unfortunate reality in today’s AI applications. But by evaluating the data samples and training algorithms and making sure that both are comprehensive and complete, we can mitigate unintended biases. We need to task diverse teams with governing the machines to prevent unwanted outcomes. With the right protocol and measures, we can ensure that AI delivers on its promise and yields the best business results.

     Author: Sanjay Srivastava

    Source: Information Management

  • Turning AI into a successful strategy: 8 tips for marketers


    Artificial Intelligence (AI) should be the most important aspect of a data strategy. That is the view of more than 60 percent of marketers, according to research by MemSQL. But actually deploying AI turns out to be another story. How can companies turn AI into a successful strategy? Here are 8 tips for marketers:

    1. Recommendation engines

    Focus on upselling by deploying recommendation engines. Recommendation engines are built to predict what else users might find interesting based on their search terms, especially when there is a lot of choice. Recommendation engines show users information or content they might not otherwise have seen, which can ultimately lead to higher revenue from more sales. The more that is known about a visitor, the better the recommendations become, and the greater the chance of a sale. For example, more than 80 percent of the shows people watch on Netflix are found through its recommendation engine. How does this work? First, Netflix collects all the data from its users. What do they watch? What did they watch last year? Which series are watched back to back? And so on. In addition, a group of freelance and in-house taggers reviews and tags all content. Is a series set in space, or is the hero a police officer? Everything gets a tag. Machine learning algorithms are then run on this combined data, and viewers are divided into more than 2,000 different 'taste groups.' The group a user is assigned to determines which viewing suggestions he or she receives.
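    The tag-and-taste-group mechanism described above can be sketched minimally as follows. The titles, tags, and ranking rule are invented for illustration; Netflix's real system is far more sophisticated than a simple tag-overlap count.

```python
# Simplified sketch of tag-based recommendation: titles carry editorial
# tags, a user's "taste" is the set of tags from their viewing history,
# and unwatched titles are ranked by tag overlap.
CATALOG = {
    "Space Patrol":   {"space", "action"},
    "Precinct 9":     {"police", "drama"},
    "Galaxy Runners": {"space", "drama"},
    "Bake-Off":       {"cooking", "reality"},
}

def recommend(history, catalog):
    """Rank unwatched titles by how many tags they share with the
    user's watch history (a crude stand-in for 'taste groups')."""
    taste = set()
    for title in history:
        taste |= catalog[title]
    candidates = [t for t in catalog if t not in history]
    return sorted(candidates,
                  key=lambda t: len(catalog[t] & taste),
                  reverse=True)

print(recommend(["Space Patrol"], CATALOG))
# "Galaxy Runners" ranks first: it shares the "space" tag.
```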

    2. Forecasting

    Good sales forecasts help companies grow. But forecasts have been made by humans for years, and emotions can make or break a quarter. Without science, forecasts are often either overly optimistic or overly pessimistic. AI can help with forecasting based purely on data and facts. Thanks to AI, these data and facts can also be explained, allowing companies to learn from earlier forecasts so that each subsequent forecast only becomes more accurate.
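    A data-driven forecast can be as simple as fitting a trend line instead of relying on a gut feeling. The sketch below uses invented quarterly figures and ordinary least squares; real forecasting models account for seasonality, noise, and uncertainty.

```python
# Minimal data-driven forecast (invented quarterly sales figures):
# fit y = a + b*x by least squares and extrapolate the next quarter.
def forecast_next(values):
    """Fit a straight trend line to the series and return the next point."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
         / sum((x - mean_x) ** 2 for x in xs))  # slope
    a = mean_y - b * mean_x                      # intercept
    return a + b * n  # extrapolate one step ahead

quarterly_sales = [100.0, 110.0, 120.0, 130.0]
print(forecast_next(quarterly_sales))  # 140.0
```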

    3. Fight churn

    As every marketer knows, acquiring new customers is far more expensive than retaining existing ones. But how do you prevent customers from unsubscribing from your services or choosing other solutions? Make sure you understand customers who are about to leave your website better and better and can predict their behavior, because that is how customer loss can be minimized. When you effectively address customers who are on the verge of leaving, you increase the chance of conversion. By using AI to build a predictive analytics model that detects potential 'churners' and then targeting them with a marketing campaign, you prevent customer loss and can make changes to your product to counter churn.
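    The detection step can be sketched as a simple scoring model. The features, weights, and threshold below are invented for illustration; in practice they would be learned from historical churn data rather than set by hand.

```python
# Hedged sketch of churn scoring (invented weights and features):
# score each customer on simple engagement signals and flag likely
# churners for a retention campaign.
def churn_score(days_since_login, support_tickets, monthly_visits):
    """Higher score = higher churn risk (illustrative linear weights)."""
    return (0.02 * days_since_login
            + 0.10 * support_tickets
            - 0.05 * monthly_visits)

def flag_churners(customers, threshold=0.5):
    """Return ids of customers whose score exceeds the threshold."""
    return [cid for cid, feats in customers.items()
            if churn_score(*feats) > threshold]

customers = {
    "alice": (2, 0, 20),   # recently active: low risk
    "bob":   (45, 3, 1),   # dormant with complaints: high risk
}
print(flag_churners(customers))  # ['bob']
```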

    4. Content generation

    Content is still king. And you can capitalise on that with Natural Language Processing (NLP), the ability of a computer program to understand human language. NLP will keep developing in the near future and become more mainstream. As computers understand language better and better, simple content can increasingly be generated automatically. That content remains hugely important is shown by research from the Content Marketing Institute (CMI): content marketing turns out to deliver three times as many leads per dollar spent as paid search! Moreover, content marketing costs less while offering greater long-term benefits.
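    The 'simple content' that can already be generated automatically is usually data-to-text: filling a template from structured data. A minimal sketch (the product fields are hypothetical):

```python
def product_blurb(p):
    """Fill a template from structured product data, a common first
    step in automated content generation."""
    discount = round(100 * (1 - p["price"] / p["list_price"]))
    return (f"{p['name']} is now available for €{p['price']:.2f} "
            f"({discount}% off the regular €{p['list_price']:.2f}).")

item = {"name": "Acme headphones", "price": 59.99, "list_price": 79.99}
print(product_blurb(item))
```

    Modern NLP systems go far beyond templates, but this is the kind of routine copy that is automated first.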

    5. Hyper-targeted advertising

    Customers have ever more access to information and, faced with a surplus of choices, become less loyal to a product or brand. The customer experience a company offers matters more and more, so advertisements too must feel like a personal offer. Research by Salesforce shows that 51 percent of consumers expect that by 2020 companies will anticipate their needs and actively make relevant suggestions, in other words deploy hyper-targeted advertising. So use AI for data-driven customer segmentation and make advertisements ever more relevant per target group.
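    One simple, widely used form of data-driven segmentation, not named in the article but in the same spirit, is an RFM (recency, frequency, monetary) split; the thresholds and labels below are purely illustrative:

```python
def rfm_segment(recency_days, frequency, monetary):
    """Classic recency/frequency/monetary split: each satisfied
    criterion adds one point, and the total picks the segment."""
    score = (recency_days <= 30) + (frequency >= 5) + (monetary >= 100)
    return {3: "champion", 2: "loyal", 1: "at risk", 0: "dormant"}[score]

print(rfm_segment(recency_days=10, frequency=8, monetary=250))   # champion
print(rfm_segment(recency_days=200, frequency=1, monetary=20))   # dormant
```

    Each segment can then get its own creative and bid strategy instead of one generic campaign.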

    6. Price optimization

    McKinsey estimates that some 30% of all pricing decisions companies make each year fail to arrive at the optimal price. To stay competitive, it is essential to continuously balance what customers are willing to pay for a product or service against what the profit margins can bear. Large companies show that price optimization is often crucial to their success: Walmart reportedly changes its prices more than 50,000 times a month. By using AI for dynamic pricing, prices can be updated continuously based on changing factors, so you no longer depend on static data.
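    Dynamic pricing can be sketched as a rule that nudges the price with demand and competition while protecting the margin; real systems learn these adjustments from data, and every parameter below is illustrative:

```python
def dynamic_price(cost, base_price, demand_ratio, competitor_price,
                  min_margin=0.10):
    """Nudge the price with demand and competition, but never drop
    below cost plus a minimum margin."""
    price = base_price * (0.9 + 0.2 * min(demand_ratio, 1.0))  # demand nudge
    price = min(price, competitor_price * 1.05)                # stay competitive
    return max(price, cost * (1 + min_margin))                 # protect margin

# High demand, but a cheaper competitor caps the price near their level.
print(dynamic_price(cost=40, base_price=60, demand_ratio=1.2,
                    competitor_price=58))
```

    Rerunning this whenever demand or competitor prices change is what turns a static price list into continuous repricing.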

    7. Score better leads

    Deploy predictive lead scoring to score better leads and focus all efforts on those most likely to buy. An IDC survey shows that 83 percent of companies already use predictive lead scoring for sales and marketing or plan to. And with the help of AI there are big gains to be made here. Predictive lead scoring is designed to determine which criteria characterise a good lead. It uses algorithms that establish which properties converted and non-converted leads have in common. With that knowledge, lead scoring software can build and test several predictive scoring models and then automatically pick the one that best fits a set of sample data. And because lead scoring software also uses machine learning, lead scores become ever more accurate.
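    The 'shared properties of converted leads' idea can be illustrated by scoring each attribute by its conversion lift over the baseline rate; real lead scoring tools fit full models, so treat this (and the sample CRM records) as a sketch:

```python
from collections import Counter

def attribute_lift(leads):
    """Score each attribute by how much more often it converts than
    the overall baseline (>1 = good sign, <1 = bad sign)."""
    total, converted = Counter(), Counter()
    n_conv = sum(1 for lead in leads if lead["converted"])
    for lead in leads:
        for a in lead["attrs"]:
            total[a] += 1
            if lead["converted"]:
                converted[a] += 1
    base = n_conv / len(leads)
    return {a: (converted[a] / total[a]) / base for a in total}

leads = [  # hypothetical CRM records
    {"attrs": {"demo_requested", "enterprise"}, "converted": True},
    {"attrs": {"demo_requested"}, "converted": True},
    {"attrs": {"newsletter"}, "converted": False},
    {"attrs": {"newsletter", "enterprise"}, "converted": False},
]
scores = attribute_lift(leads)
```

    Here requesting a demo converts at twice the baseline rate, so new leads with that attribute would be prioritised.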

    8. Marketing attribution

    And finally: understand in detail where the best (and worst) conversions come from, so you can act on it. With conversion attribution you can measure precisely through which website, search engine, advertisement and so on a visitor arrived at your site, and whether or not they placed an order there. With the help of machine learning you can build a smarter marketing attribution system that identifies exactly what influences individuals to show the desired behaviour; in this case, the desired behaviour is making a purchase. A good AI-powered marketing attribution system can thus drive more conversion.
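    Before machine learning enters the picture, attribution starts from rule-based baselines such as last-touch or linear credit over a customer's touchpoints; a sketch with hypothetical journeys:

```python
from collections import defaultdict

def attribute(paths, model="linear"):
    """Credit conversions to channels: 'last' gives all credit to the
    final touchpoint, 'linear' splits it evenly across the journey."""
    credit = defaultdict(float)
    for path in paths:  # each path: channel sequence before one conversion
        if model == "last":
            credit[path[-1]] += 1.0
        else:
            for channel in path:
                credit[channel] += 1.0 / len(path)
    return dict(credit)

# Hypothetical customer journeys ending in a purchase.
paths = [["search", "email", "ad"], ["ad", "email"]]
print(attribute(paths, "last"))    # {'ad': 1.0, 'email': 1.0}
```

    An ML-based attribution system effectively learns these per-channel weights from data instead of fixing them by rule.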

    Author: Hylke Visser

    Source: Emerce

  • Artificial intelligence: Can Watson save IBM?

    The history of artificial intelligence has been marked by seemingly revolutionary moments — breakthroughs that promised to bring what had until then been regarded as human-like capabilities to machines. The AI highlights reel includes the “expert systems” of the 1980s and Deep Blue, IBM’s world champion-defeating chess computer of the 1990s, as well as more recent feats like the Google system that taught itself what cats look like by watching YouTube videos.

    But turning these clever party tricks into practical systems has never been easy. Most were developed to showcase a new computing technique by tackling only a very narrow set of problems, says Oren Etzioni, head of the AI lab set up by Microsoft co-founder Paul Allen. Putting them to work on a broader set of issues presents a much deeper set of challenges.
    Few technologies have attracted the sort of claims that IBM has made for Watson, the computer system on which it has pinned its hopes for carrying AI into the general business world. Named after Thomas Watson Sr, the chief executive who built the modern IBM, the system first saw the light of day five years ago, when it beat two human champions on an American question-and-answer TV game show, Jeopardy!
    But turning Watson into a practical tool in business has not been straightforward. After setting out to use it to solve hard problems beyond the scope of other computers, IBM in 2014 adapted its approach.
    Rather than just selling Watson as a single system, its capabilities were broken down into different components: each of these can now be rented to solve a particular business problem, a set of 40 different products such as language-recognition services that amount to a less ambitious but more pragmatic application of an expanding set of technologies.
    Though it does not disclose the performance of Watson separately, IBM says the idea has caught fire. John Kelly, an IBM senior vice-president and head of research, says the system has become “the biggest, most important thing I’ve seen in my career” and is IBM’s fastest growing new business in terms of revenues.
    But critics say that what IBM now sells under the Watson name has little to do with the original Jeopardy!-playing computer, and that the brand is being used to create a halo effect for a set of technologies that are not as revolutionary as claimed.

    “Their approach is bound to backfire,” says Mr Etzioni. “A more responsible approach is to be upfront about what a system can and can’t do, rather than surround it with a cloud of hype.”
    Nothing that IBM has done in the past five years shows it has succeeded in using the core technology behind the original Watson demonstration to crack real-world problems, he says.

    Watson’s case
    The debate over Watson’s capabilities is more than just an academic exercise. With much of IBM’s traditional IT business shrinking as customers move to newer cloud technologies, Watson has come to play an outsized role in the company’s efforts to prove that it is still relevant in the modern business world. That has made it key to the survival of Ginni Rometty, the chief executive who, four years after taking over, is struggling to turn round the company.
    Watson’s renown is still closely tied to its success on Jeopardy! “It’s something everybody thought was ridiculously impossible,” says Kris Hammond, a computer science professor at Northwestern University. “What it’s doing is counter to what we think of as machines. It’s doing something that’s remarkably human.”

    By divining the meaning of cryptically worded questions and finding answers in its general knowledge database, Watson showed an ability to understand natural language, one of the hardest problems for a computer to crack. The demonstration seemed to point to a time when computers would “understand” complex information and converse with people about it, replicating and eventually surpassing most forms of human expertise.
    The biggest challenge for IBM has been to apply this ability to complex bodies of information beyond the narrow confines of the game show and come up with meaningful answers. For some customers, this has turned out to be much harder than expected.
    The University of Texas’s MD Anderson Cancer Center began trying to train the system three years ago to discern patients’ symptoms so that doctors could make better diagnoses and plan treatments.
    “It’s not where I thought it would go. We’re nowhere near the end,” says Lynda Chin, head of innovation at the University of Texas’ medical system. “This is very, very difficult.” Turning a word game-playing computer into an expert on oncology overnight is as unlikely as it sounds, she says.

    Part of the problem lies in digesting real-world information: reading and understanding reams of doctors’ notes that are hard for a computer to ingest and organise. But there is also a deeper epistemological problem. “On Jeopardy! there’s a right answer to the question,” says Ms Chin, but in the medical world there are often just well-informed opinions.
    Mr Kelly denies IBM underestimated how hard challenges like this would be and says a number of medical organisations are on the brink of bringing similar diagnostic systems online.

    Applying the technology
    IBM’s initial plan was to apply Watson to extremely hard problems, announcing in early press releases “moonshot” projects to “end cancer” and accelerate the development of Africa. Some of the promises evaporated almost as soon as the ink on the press releases had dried. For instance, a far-reaching partnership with Citibank to explore using Watson across a wide range of the bank’s activities quickly came to nothing.
    Since adapting in 2014, IBM now sells some services under the Watson brand. Available through APIs, or programming “hooks” that make them available as individual computing components, they include sentiment analysis — trawling information like a collection of tweets to assess mood — and personality tracking, which measures a person’s online output using 52 different characteristics to come up with a verdict.

    At the back of their minds, most customers still have some ambitious “moonshot” project they hope that the full power of Watson will one day be able to solve, says Mr Kelly; but they are motivated in the short term by making improvements to their business, which he says can still be significant.
    This more pragmatic formula, which puts off solving the really big problems to another day, is starting to pay dividends for IBM. Companies like Australian energy group Woodside are using Watson’s language capabilities as a form of advanced search engine to trawl their internal “knowledge bases”. After feeding more than 20,000 documents from 30 years of projects into the system, the company’s engineers can now use it to draw on past expertise, like calculating the maximum pressure that can be used in a particular pipeline.
    To critics in the AI world, the new, componentised Watson has little to do with the original breakthrough and waters down the technology. “It feels like they’re putting a lot of things under the Watson brand name — but it isn’t Watson,” says Mr Hammond.
    Mr Etzioni goes further, claiming that IBM has done nothing to show that its original Jeopardy!-playing breakthrough can yield results in the real world. “We have no evidence that IBM is able to take that narrow success and replicate it in broader settings,” he says. Of the box of tricks that is now sold under the Watson name, he adds: “I’m not aware of a single, super-exciting app.”

    To IBM, though, such complaints are beside the point. “Everything we brand Watson analytics is very high-end AI,” says Mr Kelly, involving “machine learning and high-speed unstructured data”. Five years after Jeopardy! the system has evolved far beyond its original set of tricks, adding capabilities such as image recognition to expand greatly the range of real-world information it can consume and process.

    Adopting the system
    This argument may not matter much if the Watson brand lives up to its promise. It could be self-fulfilling if a number of early customers adopt the technology and put in the work to train the system to work in their industries, something that would progressively extend its capabilities.

    Another challenge for early users of Watson has been knowing how much trust to put in the answers the system produces. Its probabilistic approach makes it very human-like, says Ms Chin at MD Anderson. Having been trained by experts, it tends to make the kind of judgments that a human would, with the biases that implies.
    In the business world, a brilliant machine that throws out an answer to a problem but cannot explain itself will be of little use, says Mr Hammond. “If you walk into a CEO’s office and say we need to shut down three factories and sack people, the first thing the CEO will say is: ‘Why?’” He adds: “Just producing a result isn’t enough.”
    IBM’s attempts to make the system more transparent, for instance by using a visualisation tool called WatsonPaths to give a sense of how it reached a conclusion, have not gone far enough, he adds.
    Mr Kelly says a full audit trail of Watson’s decision-making is embedded in the system, even if it takes a sophisticated user to understand it. “We can go back and figure out what data points Watson connected” to reach its answer, he says.

    He also contrasts IBM with other technology companies like Google and Facebook, which are using AI to enhance their own services or make their advertising systems more effective. IBM is alone in trying to make the technology more transparent to the business world, he argues: “We’re probably the only ones to open up the black box.”
    Even after the frustrations of wrestling with Watson, customers like MD Anderson still believe it is better to be in at the beginning of a new technology.
    “I am still convinced that the capability can be developed to what we thought,” says Ms Chin. Using the technology to put the reasoning capabilities of the world’s oncology experts into the hands of other doctors could be far-reaching: “The way Amazon did for retail and shopping, it will change what care delivery looks like.”
    Ms Chin adds that Watson will not be the only reasoning engine that is deployed in the transformation of healthcare information. Other technologies will be needed to complement it, she says.
    Five years after Watson’s game show gimmick, IBM has finally succeeded in stirring up hopes of an AI revolution in business. Now, it just has to live up to the promises.

    Source: Financial Times

  • Big Data Predictions for 2016

    A roundup of big data and analytics predictions and pontifications from several industry prognosticators.

    At the end of each year, PR folks from different companies in the analytics industry send me predictions from their executives on what the next year holds. This year, I received a total of 60 predictions from a record 17 companies. I can't laundry-list them all, but I can and did put them in a spreadsheet (irony acknowledged) to determine the broad categories many of them fall in. And the bigger of those categories provide a nice structure to discuss many of the predictions in the batch.

    Predictions streaming in
    MapR CEO John Schroeder, whose company just added its own MapR Streams component to its Hadoop distribution, says "Converged Approaches [will] Become Mainstream" in 2016. By "converged," Schroeder is alluding to the simultaneous use of operational and analytical technologies. He explains that "this convergence speeds the 'data to action' cycle for organizations and removes the time lag between analytics and business impact."

    The so-called "Lambda Architecture" focuses on this same combination of transactional and analytical processing, though MapR would likely point out that a "converged" architecture co-locates the technologies and avoids Lambda's approach of tying the separate technologies together.

    Whether integrated or converged, Phu Hoang, the CEO of DataTorrent predicts 2016 will bring an ROI focus to streaming technologies, which he summarizes as "greater enterprise adoption of streaming analytics with quantified results." Hoang explains that "while lots of companies have already accepted that real-time streaming is valuable, we'll see users looking to take it one step further to quantify their streaming use cases."

    Which industries will take charge here? Hoang says "FinTech, AdTech and Telco lead the way in streaming analytics." That makes sense, but I think heavy industry is, and will be, in a leadership position here as well.

    In fact, some in the industry believe that just about everyone will formulate a streaming data strategy next year. One of those is Anand Venugopal of Impetus Technologies, who I spoke with earlier this month. Venugopal, in fact, feels that we are within two years of streaming data being looked upon as just another data source.

    Internet of predicted things
    It probably won't shock you that the Internet of Things (IoT) was a big theme in this year's round of predictions. Quentin Gallivan, Pentaho's CEO, frames the thoughts nicely with this observation: "Internet of Things is getting real!" Adam Wray, CEO at Basho, quips that "organizations will be seeking database solutions that are optimized for the different types of IoT data." That might sound a bit self-serving, but Wray justifies this by reasoning that this will be driven by the need to "make managing the mix of data types less operationally complex." That sounds fair to me.

    Snehal Antani, CTO at Splunk, predicts that "Industrial IoT will fundamentally disrupt the asset intelligence industry." Suresh Vasudevan, the CEO of Nimble Storage, proclaims "in 2016 the IoT invades the datacenter." That may be, but IoT technologies are far from standardized, and that's a barrier to entry for the datacenter. Maybe that's why the folks at DataArt say "the IoT industry will [see] a year of competition, as platforms strive for supremacy." Maybe the datacenter invasion will come in 2017, then.

    Otto Berkes, CTO at CA Technologies, asserts that "Bitcoin-born Blockchain shows it can be the storage of choice for sensors and IoT." I hardly fancy myself an expert on blockchain technology, so I asked CA for a little more explanation around this one. A gracious reply came back, explaining that "IoT devices using this approach can transact directly and securely with each other...such a peer-to-peer configuration can eliminate potential bottlenecks and vulnerabilities." That helped a bit, and it incidentally shines a light on just how early-stage IoT technology still is, with respect to security and distributed processing efficiencies.

    Growing up
    Though admittedly broad, the category with the most predictions centered on the theme of value and maturity in Big Data products supplanting the fascination with new features and products. Essentially, value and maturity are proxies for the enterprise-readiness of Big Data platforms.

    Pentaho's Gallivan says that "the cool stuff is getting ready for prime time." MapR's Schroeder predicts "Shiny Object Syndrome Gives Way to Increased Focus on Fundamental Value," and qualifies that by saying "...companies will increasingly recognize the attraction of software that results in business impact, rather than focusing on raw big data technologies." In a related item, Schroeder predicts "Markets Experience a Flight to Quality," further stating that "...investors and organizations will turn away from volatile companies that have frequently pivoted in their business models."

    Sean Ma, Trifacta's Director of Product Management, looking at the manageability and tooling side of maturity, predicts that "Increasing the amount of deployments will force vendors to focus their efforts on building and marketing management tools." He adds: "Much of the capabilities in these tools...will need to replicate functionality in analogous tools from the enterprise data warehouse space, specifically in the metadata management and workflow orchestration." That's a pretty bold prediction, and Ma's confidence in it may indicate that Trifacta has something planned in this space. But even if not, he's absolutely right that this functionality is needed in the Big Data world. In terms of manageability, Big Data tooling needs to achieve not just parity with data warehousing and BI tools, but needs to surpass that level.

    The folks at Signals say "Technology is Rising to the Occasion" and explain that "advances in artificial intelligence and an understanding [of] how people work with data is easing the collaboration between humans and machines necessary to find meaning in big data." I'm not sure if that is a prediction, or just wishful thinking, but it certainly is the way things ought to be. For all the advances we've made in analyzing data using machine learning and intelligence, sifting through the output remains a largely manual process.

    Finally, Mike Maciag, the COO at AltiScale, asserts this forward-looking headline: "Industry standards for Hadoop solidify." Maciag backs up his assertion by pointing to the Open Data Platform initiative (ODPi) and its work to standardize Hadoop distributions across vendors. ODPi was originally anchored by Hortonworks, with numerous other companies, including AltiScale, IBM and Pivotal, jumping on board. The organization is now managed under the auspices of the Linux Foundation.

    Artificial flavor
    Artificial Intelligence (AI) and Machine Learning (ML) figured prominently in this year's predictions as well. Splunk's Antani reasons that "Machine learning will drastically reduce the time spent analyzing and escalating events among organizations." But Lukas Biewald, Founder and CEO of Crowdflower insists that "machines will automate parts of jobs -- not entire jobs." These two predictions are not actually contradictory. I offer both of them, though, to point out that AI can be a tool without being a threat.

    Be that as it may, Biewald also asserts that "AI will significantly change the business models of companies today." He expands on this by saying "legacy companies that aren't very profitable and possess large data sets may become more valuable and attractive acquisition targets than ever." In other words, if companies found gold in their patent portfolios previously, they may find more in their data sets, as other companies acquire them to further their efforts in AI, ML and predictive modeling.

    And more
    These four categories were the biggest among all the predictions but not the only ones, to be sure. Predictions around cloud, self-service, flash storage and the increasing prominence of the Chief Data Officer were in the mix as well. A number of predictions that stood on their own were there too, speaking to issues as far-reaching as salaries for Hadoop admins to open source, open data and container technology.

    What's clear from almost all the predictions, though, is that the market is starting to take basic big data technology as a given, and is looking towards next-generation integration, functionality, intelligence, manageability and stability. This implies that customers will demand certain baseline data and analytics functionality to be part of most technology solutions going forwards. And that's a great sign for everyone involved in Big Data.

    Source: ZDNet


  • Bol.com: machine learning to better match supply and demand

    An online marketplace is a concept that e-commerce keeps adopting at an increasing rate. Besides consumer-to-consumer marketplaces such as Marktplaats.nl, there are of course also business-to-consumer marketplaces, where an online platform brings consumer demand and supplier offerings together.

    Some marketplaces have no assortment of their own: their offering consists entirely of affiliated suppliers; think of Alibaba, for example. At Amazon, own products account for 50 percent. Bol.com, too, has its own marketplace: ’Verkopen via Bol.com’. It contributes millions of extra articles to bol.com’s assortment.

    Guarding content quality

    Managing such a marketplace involves a great deal. The goal is clear: bring supply and demand together as quickly as possible, so the customer is immediately offered a number of products that are relevant to him. And with millions of customers on one side and millions of products from thousands of suppliers on the other, that is quite a job.

    Jens explains: “It starts with standardising information on both the demand and the supply side. For example, if you as a supplier want to offer a Tchaikovsky CD or Dolce & Gabbana glasses on bol.com, many spellings are possible. For a sales platform like ‘Verkopen via bol.com’, the quality of the data is crucial. Maintaining the quality of the content is therefore one of the challenges.”

    On the other side of the transaction, there are of course bol.com customers who also type all kinds of variants of terms, such as brand names, into the search field. In addition, people increasingly search on generic terms such as ‘wedding gift’ or ‘things for a party’.

    Bringing supply and demand together

    As the assortment grows, which it does, and customers search ever more ‘generically’, it becomes ever more challenging to make a match and keep relevance high. Given the volume of this unstructured data and the fact that it has to be analysed in real time, you cannot make that match by hand. You have to be able to use the data intelligently. And that is one of the activities of bol.com’s customer intelligence team, part of the customer centric selling department.

    Jens: “The trick is to translate customer behaviour on the website into content improvements. By analysing the words (and word combinations) customers use to search for articles and matching them with the products that are eventually bought, synonyms can be created for the products in question. These synonyms raise the relevance of the search results and thus help the customer find the product faster. Moreover, it cuts both ways, because the quality of the product catalogue is improved at the same time. Think of refining different colour descriptions (WIT, Wit, witte, white, etc.).”
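    The synonym step described here can be sketched as co-occurrence mining: search terms whose resulting purchases concentrate on one product become candidate synonyms for it. The session data and threshold below are invented for illustration:

```python
from collections import defaultdict

def mine_synonyms(sessions, min_share=0.5):
    """Link each search term to the product its purchases concentrate
    on; dominant term-product pairs become synonym candidates."""
    term_to_products = defaultdict(lambda: defaultdict(int))
    for term, product in sessions:   # (search term, product bought)
        term_to_products[term][product] += 1
    candidates = {}
    for term, counts in term_to_products.items():
        product, hits = max(counts.items(), key=lambda kv: kv[1])
        if hits / sum(counts.values()) >= min_share:
            candidates[term] = product
    return candidates

# Hypothetical search-then-purchase sessions with colour variants.
sessions = [("witte sneakers", "sneaker-01"), ("white sneakers", "sneaker-01"),
            ("wit sneaker", "sneaker-01"), ("sneakers", "sneaker-02")]
print(mine_synonyms(sessions))
```

    All spelling variants of ‘white sneakers’ end up pointing at the same product, which is exactly the catalogue refinement the article describes.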

    Algorithms keep getting smarter

    The process above still runs semi-automatically (retroactively), but the ambition is to have it take place fully automatically in the future. To get there, machine learning techniques are being implemented step by step. The first investment was in technologies to process large volumes of unstructured data at high speed. Bol.com owns two data centres of its own, with dozens of clusters.

    “We are now experimenting extensively with using these clusters to improve the search algorithm, enrich the content and standardise,” Jens says. “And that brings challenges. After all, if you overdo standardisation, you end up in a self-fulfilling prophecy. Fortunately, the algorithms are taking over bit by bit and getting smarter all the time. The algorithm now tries to link a search term to a product itself and presents the result to various internal specialists. Concretely: the specialists are shown that ‘there is a 75 percent chance the customer means this’. That link is then validated manually. The specialists’ feedback on a proposed improvement provides important input for the algorithms to process information even better. You can see the algorithms doing their job better and better.”

    Yet this presents Jens and his team with the next question: where do you draw the line at which the algorithm can take the decision itself? Is that at 75 percent? Or should everything below 95 percent be validated by human judgement?
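    The open question of where to draw that line maps directly onto a confidence threshold in a human-in-the-loop pipeline; a minimal sketch (identifiers and threshold hypothetical):

```python
def route_match(term, product, confidence, auto_threshold=0.95):
    """Auto-accept a proposed term-product link above the threshold;
    queue everything else for a specialist to validate."""
    status = "auto" if confidence >= auto_threshold else "review"
    return (status, term, product)

# A 75%-confidence proposal goes to a human, as in the article.
print(route_match("tsjaikovski cd", "cd-0001", 0.75))
# -> ('review', 'tsjaikovski cd', 'cd-0001')
```

    Lowering `auto_threshold` trades specialist workload against the risk of wrong automatic matches, which is exactly the trade-off the team is weighing.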

    Building a better shop for our customers with big data

    Three years ago, big data was a subject discussed mainly in PowerPoint slides. Today many (larger) e-commerce companies have their own Hadoop cluster. The next step is to use big data to make the shop genuinely better for customers, and bol.com is working hard on that. In 2010 the company switched from ‘mass media’ to ‘personally relevant’ campaigning, increasingly trying to offer the customer a personal message in real time, based on various ‘triggers’.

    Those triggers (such as pages visited or products viewed) increasingly outweigh historical data (who the customer is and what he has bought in the past).

    “If you gain insight into relevant triggers and leave out the irrelevant ones,” Jens says, “you can serve the consumer better, for instance by showing the most relevant review, making an offer, or compiling a selection of comparable products. That way you align better with the customer journey, and the chance keeps growing that the customer finds what he is looking for with you.”

    Bol.com does this by first searching for the relevant triggers, based on behaviour on the website but also on the customer’s known preferences. Once these have been linked to the content, bol.com runs A/B tests to analyse conversion before deciding whether or not to roll the change out permanently. After all, every change must result in higher relevance.
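    Deciding whether an A/B test result justifies a permanent change is typically done with a significance test; a sketch using a two-proportion z-test on hypothetical conversion counts:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different
    from variant A's? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 10.0% vs 13.5% conversion on 1,000 visitors each.
z, p_value = ab_test(conv_a=100, n_a=1000, conv_b=135, n_b=1000)
```

    A p-value below the chosen significance level (often 0.05) is the usual ground for making the winning variant permanent.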

    Analysing unstructured data naturally involves a range of techniques, requiring both smart algorithms and human judgement. Jens: “Fortunately, at our company not only the algorithms are self-learning but the organisation is too, so the process keeps getting faster and better.”


    Outsourcing or doing everything in-house is a strategic decision. Bol.com chose the latter. Of course, market expertise is still brought in on an ad-hoc basis when it helps to speed up processes. Data analysts and data scientists are an important part of the growing customer centric selling team.

    The difference speaks for itself: data analysts are trained in ‘traditional’ tools such as SPSS and SQL and do analysis work. Data scientists have greater conceptual flexibility and can also program in Java, Python and Hive, among others. There are of course opportunities for ambitious data analysts to grow, but it is nevertheless becoming ever harder to find data scientists.

    Although the market is working hard to expand the supply, for now this remains a small, select group of professionals. Bol.com does everything it can to recruit and train the right people. First an employee with the right profile is brought in; think of someone who has just graduated in artificial intelligence, engineering physics or another exact science. This fresh data scientist is then taken under the wing of one of the experienced experts from bol.com’s training team. Training in programming languages is an important part of this, and beyond that it is mostly learning by doing.

    Man versus machine

    As the algorithms get ever smarter and artificial intelligence technologies ever more advanced, you might think the shortage of data scientists is temporary: the computers will take over.

    According to Jens, that is not the case: “You will always need human judgement. But because the machines take over more and more routine, standardised analysis work, you can do ever more. For example, not processing the top 10,000 search terms, but all of them. In effect, you can go much deeper and broader, so the impact of your work on the organisation is many times greater. The result? The customer is helped better and saves time because he receives ever more relevant information, and is therefore more engaged. And it takes us ever further in our ambition to offer our customers the best shop there is.”

    Click here for the full report.

    Source: Marketingfacts

  • Digital technologies to deliver European businesses 545 billion euros over the next two years

    European companies can achieve a revenue increase of 545 billion euros over the next two years by applying digital tools and technologies. For Dutch companies, this figure is 23.5 billion euros. This is the conclusion of a study by Cognizant, conducted with Roubini Global Economics among more than 800 European companies.
    The study, The Work Ahead – Europe’s Digital Imperative, is part of a global survey examining the changing nature of work in the digital age. The results show that organizations that are most proactive in bringing the physical and virtual worlds closer together have the greatest chance of increasing revenue.
    Tapping revenue potential
    Executives indicate that technologies such as Artificial Intelligence (AI), Big Data and blockchain can be a source of new business models and revenue streams, changing customer relationships and lower costs. In fact, respondents expect digital technologies to have a positive effect of 8.4 percent on revenue between now and 2018.
    Digitization can drive both cost efficiency and revenue growth. For instance, by applying intelligent process automation (IPA), in which software robots take over routine tasks, companies can cut costs in the middle and back office. The analysis shows that the impact of digital transformation on revenue and cost savings in the industries studied (retail, financial services, insurance, manufacturing and life sciences) amounts to 876 million euros in 2018.
    Still laggards in digital
    European executives expect a digital economy to be driven by a combination of data, algorithms, software robots and connected devices. Asked which technology will have the greatest influence on work in 2020, Big Data comes out on top: no fewer than 99 percent of respondents name this technology. Strikingly, AI follows closely in second place with 97 percent; respondents regard AI as more than hype. Indeed, they expect AI to take a central place in the future of work in Europe.
    On the other hand, the study shows that late adopters can expect a combined loss of 761 billion euros in 2018.
    A third of the managers surveyed say that, in their view, their employer lacks the knowledge and skills to implement the right digital strategy, or has no idea what needs to be done. Thirty percent of respondents believe their leadership invests too little in new technologies, while 29 percent encounter reluctance to adopt new ways of working.
    The main obstacles for companies in making the move to digital are fear of security issues (24%), budget constraints (21%) and a lack of talent (14%).
    Euan Davis, European Head of the Centre for the Future of Work at Cognizant, explains: “To make the necessary move to digital, management must be proactive and prepare their organization for the future of work. Slow innovation cycles and an unwillingness to experiment are the death knell for organizations seeking to properly exploit digital opportunities. Managing the digital economy is an absolute necessity. Companies that do not prioritize deepening, broadening, strengthening or improving their digital footprint are playing a losing game from the start.”
    About the study
    The findings are based on a global survey of 2,000 executives across industries, 250 middle managers responsible for other employees, 150 MBA students from major universities worldwide and 50 futurists (journalists, academics and authors). The survey of executives and managers was conducted in 18 countries in English, Arabic, French, German, Japanese and Chinese. Executives were interviewed by telephone, managers via an online questionnaire. The MBA students and futurists were interviewed by telephone in English (MBA students in 15 countries, futurists in 10). The Work Ahead – Europe’s Digital Imperative contains the 800 responses from the European survey of executives and managers. More details can be found in Work Ahead: Insights to Master the Digital Economy.
    Source: emerce.nl, 28 November 2016
  • Exploring the risks of artificial intelligence

    “Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.”

    These words, articulated by Neil Armstrong in a speech to a joint session of Congress in 1969, fit squarely into almost every decade since the turn of the century, and it seems safe to posit that the rate of change in technology has accelerated to an exponential degree in the last two decades, especially in the areas of artificial intelligence and machine learning.

    Artificial intelligence is making a sweeping entrance into almost every facet of society, in predicted and unforeseen ways, causing both excitement and trepidation. This reaction alone is predictable, but can we really predict the associated risks?

    It seems we’re all trying to get a grip on potential reality, but information overload (yet another side effect we’re struggling to deal with in our digital world) can ironically make constructing an informed opinion more challenging than ever. In the search for some semblance of truth, it can help to turn to those in the trenches.

    In my continuing series of interviews with more than 30 artificial intelligence researchers, I asked what they considered to be the most likely risk of artificial intelligence in the next 20 years.

    Some results from the survey, shown in the graphic below, included 33 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence).

    Two “greatest” risks bubbled to the top of the response pool (and the majority of respondents are not in the autonomous-robots camp, though a few do fall into it). According to this particular set of minds, the most pressing short- and long-term risks are the financial and economic harm that may be wrought, and the mismanagement of AI by human beings.

    Dr. Joscha Bach of the MIT Media Lab and Harvard Program for Evolutionary Dynamics summed up the larger picture this way:

    “The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.”

    Essentially, the introduction of AI may act as a catalyst that exposes and speeds up the imperfections already present in our society. Without a conscious and collaborative plan to move forward, we expose society to a range of risks, from bigger gaps in wealth distribution to negative environmental effects.

    Leaps in AI are already being made in the area of workplace automation and machine learning capabilities are quickly extending to our energy and other enterprise applications, including mobile and automotive. The next industrial revolution may be the last one that humans usher in by their own direct doing, with AI as a future collaborator and – dare we say – a potential leader.

    Some researchers believe it’s a matter of when, not if. In the words of Dr. Nils Nilsson, professor emeritus at Stanford University, “Machines will be singing the song, ‘Anything you can do, I can do better; I can do anything better than you’.”

    With respect to the drastic changes that lie ahead for the employment market due to increasingly autonomous systems, Dr. Helgi Helgason says: “It’s more of a certainty than a risk, and we should already be factoring this into education policies.”

    Talks at the World Economic Forum Annual Meeting in Switzerland this past January, where the topic of the economic disruption brought about by AI was clearly a main course, indicate that global leaders are starting to plan how to integrate these technologies and adapt our world economies accordingly – but this is a tall order with many cooks in the kitchen.

    Another commonly expressed risk over the next two decades is the general mismanagement of AI. It’s no secret that those in the business of AI have concerns, as evidenced by the $1 billion investment made by some of Silicon Valley’s top tech gurus to support OpenAI, a non-profit research group with a focus on exploring the positive human impact of AI technologies.

    “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” is the parallel message posted on OpenAI’s launch page from December 2015. How we approach the development and management of AI has far-reaching consequences, and shapes future society’s moral and ethical paradigm.

    Philippe Pasquier, an associate professor at Simon Fraser University, said “As we deploy more and give more responsibilities to artificial agents, risks of malfunction that have negative consequences are increasing,” though he likewise states that he does not believe AI poses a high risk to society on its own.

    With great responsibility comes great power, and how we monitor this power is of major concern.

    Dr. Pei Wang of Temple University sees major risk in “neglecting the limitations and restrictions of hot techniques like deep learning and reinforcement learning. It can happen in many domains.” Dr. Peter Voss, founder of SmartAction, expressed similar sentiments, stating that he most fears “ignorant humans subverting the power and intelligence of AI.”

    Thinking about the risks associated with emerging AI technology is hard work, engineering potential solutions and safeguards is harder work, and collaborating globally on implementation and monitoring of initiatives is the hardest work of all. But considering all that’s at stake, I would place all my bets on the table and argue that the effort is worth the risk many times over.

    Source: Tech Crunch

  • How does augmented intelligence work?

    Computers and devices that think along with us have long ceased to be science fiction. Artificial intelligence (AI) can be found in washing machines that adjust their program to the size of the load and in computer games that adapt to the level of the players. How can computers help people make smarter decisions? This extensive whitepaper describes the models applied in the HPE IDOL analytics platform.

    Mathematical models provide the human touch

    Processors can perform in the blink of an eye a calculation that would take humans weeks or months. That is why computers are better chess players than humans, but worse at poker, where the human element plays a larger role. How does a search and analytics platform ensure that more of the ‘human’ ends up in the analysis? This is achieved by using various mathematical models.

    Analytics for text, audio, images and faces

    The art is to extract actionable information from data. This is done by applying pattern recognition to different datasets. Classification, clustering and analysis also play a major role in gaining the right insights. Not only text is analyzed; increasingly, audio files and images, objects and faces are analyzed as well.
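    As a toy illustration of the classification idea mentioned above (not the HPE IDOL implementation, whose internals the whitepaper covers), a bag-of-words nearest-neighbour text classifier can be sketched in a few lines of Python; the labels and example sentences are invented:

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words vector: word -> count
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, labeled_examples):
    # Nearest-neighbour classification: take the label of the most similar example
    vec = vectorize(text)
    best_label, best_score = None, -1.0
    for label, example in labeled_examples:
        score = cosine(vec, vectorize(example))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("sport", "team wins the match after extra time"),
    ("finance", "shares rise on strong quarterly earnings"),
]
print(classify("the match ended in a late win", examples))  # sport
```

    Real platforms replace the word counts with richer features and learned models, but the pattern-matching core is the same.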

    Artificial intelligence helps people

    The whitepaper describes in detail how patterns are found in text, audio and images. How does a computer understand that the video it is analyzing is about a human? How are flat images turned into a geometric 3D image, and how does a computer decide what it sees? Think, for example, of an automated signal to the control room when a stand is getting too crowded or a traffic jam is forming. How do theoretical models help computers perceive the way humans do and support our decisions? You can read all this and more in the whitepaper Augmented Intelligence: Helping humans make smarter decisions, available via AnalyticsToday.
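    Once a vision model has counted people per stand section (the detector itself is out of scope here), the crowd alert mentioned above reduces to a simple rule. Section names, capacities and counts below are purely illustrative:

```python
# Hypothetical capacities per stand section
CAPACITY = {"north": 500, "south": 500, "east": 300}

def crowd_alerts(counts, capacity, threshold=0.9):
    """Return the sections whose occupancy reaches `threshold` of capacity,
    i.e. the ones the control room should be warned about."""
    return [s for s, n in counts.items() if n >= threshold * capacity[s]]

# Counts as they might come from an upstream person detector
print(crowd_alerts({"north": 480, "south": 200, "east": 290}, CAPACITY))
# ['north', 'east']
```

    The intelligence lies in producing reliable counts from raw video; the alerting logic on top can stay this simple.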

    Analyticstoday.nl, 12 October 2016

  • Investing In Artificial Intelligence

    Artificial intelligence is one of the most exciting and transformative opportunities of our time. From my vantage point as a venture investor at Playfair Capital, where I focus on investing and building community around AI, I see this as a great time for investors to help build companies in this space. There are three key reasons.

    First, with 40 percent of the world’s population now online, and more than 2 billion smartphones being used with increasing addiction every day (KPCB), we’re creating data assets, the raw material for AI, that describe our behaviors, interests, knowledge, connections and activities at a level of granularity that has never existed.

    Second, the costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing, making AI applications possible and affordable.

    Third, we’ve seen significant improvements recently in the design of learning systems, architectures and software infrastructure that, together, promise to further accelerate the speed of innovation. Indeed, we don’t fully appreciate what tomorrow will look and feel like.

    We also must realize that AI-driven products are already out in the wild, improving the performance of search engines, recommender systems (e.g., e-commerce, music), ad serving and financial trading (amongst others).

    Companies with the resources to invest in AI are already creating an impetus for others to follow suit — or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks.

    How Might You Apply AI Technologies?

    With such a powerful and generally applicable technology, AI companies can enter the market in different ways. Here are six to consider, along with example businesses that have chosen these routes:

    • There are vast amounts of enterprise and open data available in various data silos, whether web or on-premise. Making connections between these enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions (e.g., DueDil*, Premise and Enigma).
    • Leverage the domain expertise of your team and address a focused, high-value, recurring problem using a set of AI techniques that compensate for the shortfalls of humans (e.g., Sift Science or Ravelin* for online fraud detection).
    • Productize existing or new AI frameworks for feature engineering, hyperparameter optimization, data processing, algorithms, model training and deployment (amongst others) for a wide variety of commercial problems (e.g., H2O.ai, Seldon* and SigOpt).
    • Automate the repetitive, structured, error-prone and slow processes conducted by knowledge workers on a daily basis using contextual decision making (e.g., Gluru, x.ai and SwiftKey).
    • Endow robots and autonomous agents with the ability to sense, learn and make decisions within a physical environment (e.g., Tesla, Matternet and SkyCatch).
    • Take the long view and focus on research and development (R&D), taking on risks that would otherwise be relegated to academia, which strict budgets often no longer allow (e.g., DNN Research, DeepMind and Vicarious).

    There’s more on this discussion here. A key consideration, however, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productizing technologies for cheap means that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.

    Which Challenges Are Faced By Operators And Closely Considered By Investors?

    I see a range of operational, commercial and financial challenges that operators and investors closely consider when working in the AI space. Here are the main points to keep top of mind:


    • How to balance the longer-term R&D route with monetization in the short term? While more libraries and frameworks are being released, there’s still significant upfront investment to be made before product performance is acceptable. Users will often be benchmarking against a result produced by a human, so that’s what you’re competing against.
    • The talent pool is shallow: few have the right blend of skills and experience. How will you source and retain talent?
    • Think about balancing engineering with product research and design early on. Working on aesthetics and experience as an afterthought is tantamount to slapping lipstick onto a pig. It’ll still be a pig.
    • Most AI systems need data to be useful. How do you bootstrap your system without much data in the early days?


    • AI products are still relatively new in the market. As such, buyers are likely to be non-technical (or not have enough domain knowledge to understand the guts of what you do). They might also be new buyers of the product you sell. Hence, you must closely appreciate the steps/hurdles in the sales cycle.
    • How to deliver the product? SaaS, API, open source?
    • Include chargeable consulting, set up, or support services?
    • Will you be able to use high-level learnings from client data for others?


    • Which type of investors are in the best position to appraise your business?
    • What progress is deemed investable? MVP, publications, open source community of users or recurring revenue?
    • Should you focus on core product development or work closely on bespoke projects with clients along the way?
    • Consider buffers when raising capital to ensure that you’re not going out to market again before you’ve reached a significant milestone. 

    Build With The User In The Loop

    There are two big factors that make involving the user in an AI-driven product paramount. One, machines don’t yet recapitulate human cognition. To pick up where software falls short, we need to call on the user for help. And two, buyers/users of software products have more choice today than ever. As such, they’re often fickle (the average 90-day retention for apps is 35 percent).

    Returning expected value out of the box is key to building habits (hyperparameter optimization can help). Here are some great examples of products that prove that involving the user in the loop improves performance:

    • Search: Google uses autocomplete as a way of understanding and disambiguating language/query intent.
    • Vision: Google Translate or Mapillary traffic sign detection enable the user to correct results.
    • Translation: Unbabel community translators perfect machine transcripts.
    • Email Spam Filters: Google, again, to the rescue.
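    The user-in-the-loop idea behind these examples can be sketched with a toy spam filter (a word-count scorer invented for illustration, not any production system): every user correction feeds straight back into the model's statistics, so the product improves with use.

```python
from collections import Counter

class FeedbackFilter:
    """Toy user-in-the-loop spam filter: each user correction
    immediately updates the word statistics used for scoring."""

    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()

    def score(self, text):
        # True means "looks like spam": more spam-word hits than ham-word hits
        words = text.lower().split()
        return sum(self.spam[w] for w in words) > sum(self.ham[w] for w in words)

    def feedback(self, text, is_spam):
        # Fold the user's correction straight back into the model
        target = self.spam if is_spam else self.ham
        target.update(text.lower().split())

f = FeedbackFilter()
f.feedback("win a free prize now", is_spam=True)
f.feedback("meeting notes attached", is_spam=False)
print(f.score("free prize inside"))  # True
```

    Production filters use far stronger models, but the loop (predict, let the user correct, retrain) is the same pattern the examples above rely on.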

    We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer-term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.

    What’s The AI Investment Climate Like These Days?

    To put this discussion into context, let’s first look at the global VC market: Q1-Q3 2015 saw $47.2 billion invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA).

    We’re likely to breach $55 billion by year’s end. There are roughly 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.

    So far, we’ve seen about 300 deals into AI companies (defined as businesses whose description includes such keywords as artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning) from January 1, 2015 through December 1, 2015 (CB Insights).
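    A keyword screen of this kind is easy to sketch (the exact CB Insights methodology is not public; the keyword list below simply mirrors the terms the article names):

```python
# Illustrative keyword screen for tagging deals as "AI companies"
AI_KEYWORDS = {
    "artificial intelligence", "machine learning", "computer vision",
    "nlp", "data science", "neural network", "deep learning",
}

def is_ai_company(description):
    # Case-insensitive substring match against the keyword list
    d = description.lower()
    return any(k in d for k in AI_KEYWORDS)

deals = [
    "We build deep learning tools for radiology",
    "Marketplace for vintage furniture",
]
print([is_ai_company(d) for d in deals])  # [True, False]
```

    Such screens over- and under-count (a description can mention "machine learning" without the company building any), which is worth remembering when reading deal statistics.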

    In the U.K., companies like Ravelin*, Signal and Gluru* raised seed rounds. Approximately $2 billion was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339 million debt+credit), ZestFinance ($150 million debt), LiftForward ($250 million credit) and Argon Credit ($75 million credit). Importantly, 80 percent of deals were < $5 million in size, and 90 percent of the cash was invested into U.S. companies versus 13 percent in Europe. Seventy-five percent of rounds were in the U.S.

    The exit market has seen 33 M&A transactions and 1 IPO. Six events were for European companies, 1 in Asia and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532 million; $17 million raised), Elastica/Blue Coat Systems ($280 million; $45 million raised) and SupersonicAds/IronSource ($150 million; $21 million raised), which returned solid multiples of invested capital. The remaining transactions were mostly for talent, given that the median team size at the time of acquisition was 7 people.

    Altogether, AI investments will have accounted for roughly 5 percent of total VC investments for 2015. That’s higher than the 2 percent claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software.

    The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the U.S. Businesses must therefore have exposure to this market.

    Which Problems Remain To Be Solved?


    I spent a number of summers in university and three years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is very challenging, expensive, lengthy and regulated, and ultimately offers a transient solution to treating disease.

    Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real time, driving down cost of care over a patient’s lifetime while consequently improving outcomes.

    Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third parties). Sure, the news might paint a different story, but the fact is that we’re still using the web and its wealth of products.

    On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge.

    Look at today’s clinical model. A patient presents into the hospital when they feel something is wrong. The doctor must conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g., in the case of cancer).

    Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions...
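    As a sketch of what anomaly detection on continuous monitoring might look like (the heart-rate series, window and threshold below are invented for illustration), a rolling z-score flags readings that deviate sharply from their recent history:

```python
import statistics

def anomalies(readings, window=10, z_limit=3.0):
    """Indices of readings more than `z_limit` standard deviations
    away from the trailing window's mean: a minimal anomaly detector."""
    flagged = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mu = statistics.fmean(past)
        sd = statistics.pstdev(past)
        if sd and abs(readings[i] - mu) > z_limit * sd:
            flagged.append(i)
    return flagged

# Resting heart rate with one sudden spike at index 10
hr = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62, 118, 63, 62]
print(anomalies(hr))  # [10]
```

    Real systems layer learned models on top of signals like this, but the core idea, comparing each new reading to the patient's own baseline, is the same.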

    Some companies are already hacking away at this problem:

    • Sano: Continuously monitor biomarkers in blood using sensors and software.
    • Enlitic/MetaMind/Zebra Medical: Vision systems for decision support (MRI/CT).
    • Deep Genomics/Atomwise: Learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
    • Flatiron Health: Common technology infrastructure for clinics and hospitals to process oncology data generated from research.
    • Google: Filed a patent covering an invention for drawing blood without a needle. This is a small step toward wearable sampling devices.

    A point worth noting is that the U.K. has a slight leg up on the data access front. Initiatives like the U.K. Biobank (500,000 patient records), Genomics England (100,000 genomes sequenced), HipSci (stem cells) and the NHS care.data program are leading the way in creating centralized data repositories for public health and therapeutic research.

    Enterprise Automation

    Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9 trillion by 2020 (BAML). Coupled with efficiency gains worth $1.9 trillion driven by robots, I reckon there’s a chance for near-complete automation of core, repetitive business functions in the future.

    Think of all the productized SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making.

    Perhaps we could eventually re-imagine the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfillment and shipping. Of course, this is probably a ways off.

    I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long-term innovation, especially considering that far less is occurring within universities. VC was born to fund moonshots.

    We must remember that access to technology will, over time, become commoditized. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering.

    Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g., the user experience layer). As such, there’s renewed focus on core principles: build a solution to an unsolved or poorly served, high-value, persistent problem for consumers or businesses.

    Finally, you must have exposure to the U.S. market, where the lion’s share of value is created and realized. We have an opportunity to catalyze the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond.

    Source: TechCrunch

  • Organizing Big Data with AI

    No matter what your professional goals are, the road to success is paved with small gestures. Often framed as KPIs (key performance indicators), these transitional steps form the core categories contextualizing business data. But what data matters?

    In the age of big data, businesses are producing larger amounts of information than ever before and there needs to be efficient ways to categorize and interpret that data. That’s where AI comes in.

    Building Data Categories

    One of the longstanding challenges with KPI development is that there are countless divisions any given business can use. Some focus on website traffic while others are concerned with social media engagement, but the most important thing is to focus on real actions and not vanity measures. Even if it’s just the first step toward a sale, your KPIs should reflect value for your bottom line.


    Small But Powerful

    KPIs typically cover a variety of similar actions – all Facebook behaviors or all inbound traffic, for example. The alternative, though, is to break down KPI-type behaviors into something known as micro conversions. 

    Micro conversions are simple behaviors that signal movement toward an ultimate goal like completing a sale, but carefully gathering data from micro conversions and tracking them can also help identify friction points and other barriers to conversion. This is especially true any time your business undergoes a redesign or institutes a new strategy. Comparing micro data points from the different phases, then, is a high value means of assessment.
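    A minimal sketch of how micro conversion data can expose friction points (the funnel step names and counts below are illustrative, not from any real dataset): step-to-step rates show exactly where users drop off.

```python
def conversion_rates(funnel_counts):
    """Step-to-step conversion rates for an ordered funnel;
    a low rate marks a friction point worth investigating."""
    steps = list(funnel_counts.items())
    rates = {}
    for (_, n_prev), (step, n_step) in zip(steps, steps[1:]):
        rates[step] = n_step / n_prev if n_prev else 0.0
    return rates

# Hypothetical micro conversion counts for one phase of a redesign
before = {"visit": 10000, "product_view": 4000, "add_to_cart": 800, "sale": 200}
print(conversion_rates(before))
# {'product_view': 0.4, 'add_to_cart': 0.2, 'sale': 0.25}
```

    Comparing these rate dictionaries before and after a redesign is exactly the phase-to-phase assessment described above.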

    AI Interpretation

    Without AI, this micro data would be burdensome to manage – there’s just so much of it – but AI tools are able both to collect data and to interpret it for application, particularly within comparative frameworks. All AI needs is well-developed KPIs.

    Business KPIs direct AI data collection, allow the system to identify shortfalls, and highlight performance goals that are being met, but it’s important to remember that AI tools can’t fix broader strategic or design problems. With the rise of machine learning, some businesses have come to believe that AI can solve any problem, but what it really does is clarify the data at every level, allowing your business to jump into action.

    Micro Mapping

    Perhaps the easiest way to describe what AI does in the age of big data is with a comparison. Your business is a continent and AI is the cartographer that offers you a map of everything within your business’s boundaries. Every topographical detail and landmark is noted. But the cartographer isn’t planning a trip or analyzing the political situation of your country. That’s up to someone else. In your business, that translates to the marketing department, your UI/UX experts, or C-suite executives. They solve problems by drawing on the map.

    Unprocessed big data is overwhelming – think millions of grains of sand that don’t mean anything on their own. AI processes that data into something useful, something with strategic value. Depending on your KPI, AI can even draw a path through the data, highlighting common routes from entry to conversion, where customers get lost (what you might consider friction points) and where they engage. When you begin to see data in this way, it becomes clear that it’s a world unto itself, and one that has so far been fundamentally incomprehensible to users.

    Even older CRM and analytics programs fall short when it comes to seeing the big picture, and that’s why data management has changed so much in recent years. Suddenly, we have the technology to identify more than click-through rates or page likes. AI fueled by big data marks a new organizational era with an emphasis on action. If you’re willing to follow the data, AI will draw you the map.


    Author: Larry Alton

    Source: Information Management

  • Routine jobs are being swallowed up by robots and artificial intelligence

    As of 2016, robots and artificial intelligence are already developed far enough to take over a relatively large share of predictable physical work and data-processing tasks from humans. Technological progress will, moreover, allow machines to take over ever more human tasks, leading either to more time for other work or to a reduction in the number of human employees.

    Automation and robotization offer humanity the chance to free itself from repetitive physical work that is often experienced as unpleasant or dull. While the disappearance of this work will have positive effects on aspects such as health and job quality, the development also has negative effects on employment – especially in jobs that require few skills. In recent years much has been said about the scale of the threat robots pose to human jobs, and a recent study by McKinsey & Company adds more fuel to the fire. According to the American consulting firm's estimates, in the short term up to 51% of all work in the United States will be heavily affected by robotization and AI technology.

    Analyzing work activities

    The study, based on an analysis of more than 2,000 work-related activities in the US across more than 800 occupations, suggests that predictable physical work in relatively stable environments runs the greatest risk of being taken over by robots or some other form of automation. Examples of such environments include accommodation and hospitality, manufacturing and retail. The opportunities for robotization are especially large in manufacturing – roughly a third of all work in the sector can be considered predictable. Given current automation technology, up to 78% of this work could be automated.

    It is not only simple production work that can be automated: data-processing and data-collection tasks can also be robotized with today's technology. According to McKinsey's calculations, up to 47% of a retail salesperson's tasks in this area could be automated – though this is still far below the 86% automation potential in the data-related work of bookkeepers, accountants and auditors.

    Automation is technically feasible

    The study also mapped which occupations hold the greatest potential for automation. Educational services and management appear, given current technology, to be the fields least affected by robotization and AI technology. The share of automatable tasks is particularly low in education, which involves little data collection, data processing or predictable physical work. Managers can expect some automation of their work, mainly in data processing and collection. In construction and agriculture, much of the work can be considered unpredictable. The unpredictable nature of these activities protects workers in these segments, because such tasks are harder to automate.

    McKinsey stresses that the analysis focuses on the ability of current technologies to take over human tasks. That something is technically possible does not mean, according to the consulting firm, that the work will actually be taken over by robots or intelligent technology. The study does not factor in the implementation costs of the technology or the limits of automation. In some cases, human workers will therefore remain cheaper and more readily available than a robotized system.

    Looking ahead, the researchers predict that new technologies in robotization and artificial intelligence will make even more tasks automatable. Technology that enables natural conversations with robots – machines that understand human language and respond automatically – will, according to the researchers, have a major impact on the scope for further robotization.

    Source: Consultancy.nl, October 3, 2016


  • Should we fear Artificial Intelligence?

    If you watch films and TV shows, in which AI has been exploited to create any number of apocalyptic scenarios, the answer might be yes. After watching Blade Runner or The Matrix or, as a more recent example, Ex Machina, it's easier to understand why AI touches off visceral reactions in the layman.

    It’s no secret that automation has posed a real threat to lower-skilled workers in blue collar industries, and that has grown into a fear of all forms of artificial intelligence. But a lot of complexities stand between where we are today and production AI, particularly the struggle to bridge the AI chasm. In other words, the type of AI Hollywood suggests we should fear, taking our jobs and possibly more, is a long way off.

    At the other end of the pop culture spectrum, we have people who have embraced AI as the future of mankind. Google’s chief futurist Ray Kurzweil is a great example of thinkers who have championed AI as the next step in the evolution of human intelligence. So which version is our AI future?

    The truth is likely somewhere in the middle. Artificial intelligence won’t compete against humans with extinction-level stakes à la Terminator, at least in forthcoming years; nor will it transcend us as Kurzweil suggests. The likeliest outcome in the near future is we carve out symbiotic roles for the two, because of their respective shortcomings.

    While many people expect any AI they interact with to pass the Turing test, the human brain is the most advanced machine we know of. Thanks to emotional intelligence, humans can interpret and adapt in real time to changing circumstances, and react differently to the same stimuli. That emotional intelligence makes humans a tough benchmark for AI to match.

    We are all talking about Amazon Go, Amazon’s attempt to bring its website to life in fully automated 3D retail centers. But who will customers talk to when an item is missing or a mistake is made in billing? We want human interactions, like a conversation with the neighborhood baker (if you’re French like me) or the opinion of a salesperson on the fit of a jacket. Now we also want efficiency, but not to the exclusion of adaptable and sympathetic emotional intelligence. 

    In some situations, efficiency and safety are preferred over empathy or creativity. For instance, many favor delegating hazardous tasks in factories or oilfields to machines, letting humans handle higher-level strategic tasks like managing employees or drawing on both the left and right brain to flesh out designs.

    The world is becoming a more complex place and we can welcome more AI to help us navigate it. Consider the accelerating advance of research in many scientific fields, making staying an expert even in a well-defined field a real challenge. The issue is not just that your field is growing, but that it touches on and draws from many other fields that are growing as well. As a result, knowledge bases are growing exponentially.

    A heart surgeon faced with a tough choice may consult a few books or a couple of experts and then identify patterns and weigh different outcomes to make a decision. Instead, they could draw on an AI to assimilate the knowledge base and reach a logical decision from a truly holistic standpoint. This does not guarantee that it will be the right answer. Machine learning can help the surgeon weigh thousands of similar cases, consider every medical angle, and even cross-reference the patient's family history. The surgeon could even cover all this ground in less time than it would have taken to page through books or call advisors. But the purely logical decision should not be the right and final decision. Doing the right thing is different from having the highest probability of success, and so the surgeon will have to consider empathy for the family, the quality of life of the patient, and many other emotional factors.

    For now, machine learning is the most straightforward AI component to implement, and the one critical to improving the human condition. ML limits AI outputs to assimilating large quantities of data and defining patterns, but it acknowledges that AI cannot evaluate complex, novel, or emotional variables and leaves multidimensional decision making to humans. 

    As researchers and futurists struggle to bring true AI to the masses, it will be a progressive transition. What I am interested to see is whether or not a rapid transition could trigger a generational clash.

    Just as there are pre-Internet and post-Internet generations, will we see pre-AI and post-AI ones? If that's the case, as with many technologies, the last generation to fear it may raise the first generation to embrace it.

    Author: Isabelle Guis 

  • Successfully implementing AI into practice

    Artificial Intelligence (AI) can be a real value driver for organizations. As the power of algorithms, computing and data surges, companies in manufacturing and industry are seeing a growing number of use cases. These systems can drive efficiency and enhance capability, but also automate tasks, decrease costs and improve revenue.

    Success and the value generated by AI benefit from a good understanding, from the C-suite down, of what the technology can deliver. Organizations should also have a well-considered implementation process. So concludes IBM in its recently published white paper on AI, 'Beyond the hype: A guide to understanding and successfully implementing artificial intelligence within your business'.

    Putting AI into practice: specific tasks

    AI is not about sentient robots and magic boxes. AI is a science and a set of computational technologies, inspired by the ways people use their nervous systems and bodies to sense, learn, reason and take action, though they typically operate quite differently. AI encompasses machine learning (machines that learn from data – algorithms adjusting themselves) and deep learning (a combination of mutually linked algorithms).

    Within AI, data scientists extract knowledge and interpret data using the right tools and statistical methods. The machines learn to recognize patterns in the data that is fed to them, and map these patterns to future outcomes.


    Relevant AI use cases span virtually every industry, but three macro domains continue to drive adoption as well as most of the economic value across businesses. Cognitive engagement involves delivering new ways for humans to engage with machines. Cognitive insights and knowledge addresses how to augment humans who are overwhelmed with information and knowledge. And cognitive automation relates to moving from process automation to mimicking human intelligence, to facilitate complex and knowledge-intensive business decisions.

    Below are some examples of successful implementations within the industrial and manufacturing domain:

    • Using the many different sensor measurements available from large truck engines, a neural network at a manufacturer is trained to recognize normal and abnormal engine behavior. The model is able to detect when specific measurements are out of the ordinary; anomalous sensor readings are highly predictive of pending engine failures.
    • At a car manufacturer, supervised learning techniques were used to develop predictive models that provide an early warning of failure based on the system messages and sensor readings that continuously stream from the production line. This early warning can be used to prioritize maintenance and reduce downtime as well as false positives and needless effort.
    • At a utility company, the output of machine learning-based predictive models was combined with prescriptive, mathematical optimization models to prescribe the optimal mix of power production sources to meet predicted demand at minimal cost. This required predicting demand as well as the available solar and wind energy capacity.
    • To understand its business dynamics and take inventory of possibly relevant data sources, a material producer used machine learning models to learn price behavior and forecast future price developments. The models also enabled buyers to evaluate their own 'what if' scenarios, all brought together for the user in an interactive dashboard.
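    The idea behind the first example – learning what "normal" sensor behavior looks like and flagging deviations – can be sketched very simply. The toy below uses a z-score threshold on synthetic readings instead of a neural network, and all values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical engine sensor readings: mostly normal, plus two faults.
normal = rng.normal(loc=90.0, scale=2.0, size=200)   # e.g. oil temperature
readings = np.concatenate([normal, [120.0, 45.0]])   # two injected faults

# Learn "normal behavior" from the training window, then flag readings
# that deviate strongly from it. A production system might train a
# neural network on many sensors instead of this one-variable baseline.
mu, sigma = normal.mean(), normal.std()
z = np.abs((readings - mu) / sigma)
anomalies = np.where(z > 4.0)[0]

print("anomalous indices:", anomalies)
```

    The statistical baseline and the neural network share the same contract: model the ordinary, and let the out-of-the-ordinary predict pending failures.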

    There are three main steps to implement AI:

    1. Develop an AI strategy and roadmap
    2. Establish AI capabilities and skills
    3. Start small and scale quickly

    In the previously mentioned white paper, IBM provides practical recommendations for avoiding frequent pitfalls such as cultural or managerial resistance, bad or insufficient data, expectations that are too high or too low, a lack of capabilities, et cetera.

    Based on its experience and knowledge, IBM can help to successfully implement AI and guide organizations in the transformation to Industry 4.0. IBM enables companies to experiment with big ideas, acquire new expertise and build new enterprise-grade solutions for immediate market impact. It gives companies the speed of a start-up, at the scale and rigor of an enterprise.

    Author: Marloes Roelands

    Source: IBM

  • The big data race reaches the City


    Vast amounts of information are being sifted for the good of commercial interests as never before

    IBM’s Watson supercomputer, once known for winning the television quiz show Jeopardy! in 2011, is now sold to wealth management companies as an affordable way to dispense investment advice. Twitter has introduced “cashtags” to its stream of social chatter so that investors can track what is said about stocks. Hedge funds are sending up satellites to monitor crop yields before even the farmers know how they’re doing.

    The world is awash with information as never before. According to IBM, 90pc of all existing data was created in the past two years. Once the preserve of academics and the geekiest hedge fund managers, the ability to harness huge amounts of noise and turn it into trading signals is now reaching the core of the financial industry.

    Last year was one of the toughest since the financial crisis for asset managers, according to BCG partner Ben Sheridan, yet they have continued to spend on data management in the hope of finding an edge in subdued markets.

    “It’s to bring new data assets to bear on some of the questions that asset managers have always asked, like macroeconomic movements,” he said.

    “Historically, these quantitative data aspects have been the domain of a small sector of hedge funds. Now it’s going to a much more mainstream side of asset managers.”

    Banks are among the biggest investors in big data

    Even Goldman Sachs has entered the race for data, leading a $15m investment round in Kensho, which stockpiles data around major world events and lets clients apply the lessons it learns to new situations. Say there’s a hurricane striking the Gulf of Mexico: Kensho might have ideas on what this means for US jobs data six months afterwards, and how that affects the S&P stock index.

    Many businesses are using computing firepower to supercharge old techniques. Hedge funds such as Winton Capital already collate obscure data sets such as wheat prices going back nearly 1,000 years, in the hope of finding patterns that will inform the future value of commodities.

    Others are paying companies such as Planet Labs to monitor crops via satellite almost in real time, offering a hint of the yields to come. Spotting traffic jams outside Wal-Marts can help traders looking to bet on the success of Black Friday sales each year – and it’s easier to do this from space than sending analysts to car parks.

    Some funds, including Eagle Alpha, have been feeding transcripts of calls with company executives into a natural language processor – an area of artificial intelligence that the Turing test foresaw – to figure out if executives have gained or lost confidence in their business. Traders might have had gut feelings about this before, but now they can get graphs.
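    A crude sketch of that transcript-scoring idea, using a tiny hand-made word list rather than a real natural language processor (the word lists and quotes here are invented, not Eagle Alpha's actual method):

```python
# Toy confidence scoring of executive-call transcripts.
POSITIVE = {"growth", "strong", "confident", "improving", "record"}
NEGATIVE = {"headwinds", "decline", "uncertain", "weak", "challenging"}

def confidence_score(transcript: str) -> int:
    """Count positive words minus negative words in a transcript."""
    words = transcript.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

q1 = "we saw strong growth and remain confident"
q2 = "facing headwinds demand is weak and uncertain"

print(confidence_score(q1))  # 3
print(confidence_score(q2))  # -3
```

    A rising score quarter-on-quarter would be read as gained confidence; this is the graph that replaces the gut feeling, however much richer the real models are.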


    There is inevitably a lot of noise among these potential trading signals, which experts are trying to weed out.

    “Most of the breakthroughs in machine-learning aren’t in finance. The signal-to-noise ratio is a problem compared to something like recognising dogs in a photograph,” said Dr Anthony Ledford, chief scientist for the computer-driven hedge fund Man AHL.

    “There is no golden indicator of what’s going to happen tomorrow. What we’re doing is trying to harness a very small edge and doing it over a long period in a large number of markets.”

    The statistics expert said the plunging cost of computer power and data storage, crossed with a “quite extraordinary” proliferation of recorded data, have helped breathe life into concepts like artificial intelligence for big investors.

    “The trading phase at the moment is making better use of the signals we already know about. But the next research stage is, can we use machine learning to identify new features?”

    AHL’s systematic funds comb through 2bn price updates on their busiest days, up from 800m during last year’s peak.

    Developments in disciplines such as engineering and computer science have contributed to the field, according to the former academic based in Oxford, where Man Group this week jointly sponsored a new research professorship in machine learning at the university.

    The artificial intelligence used in driverless cars could have applications in finance

    Dr Ledford said the technology has applications in driverless cars, which must learn how to drive in novel conditions, and identifying stars from telescope images. Indeed, he has adapted the methods used in the Zooniverse project, which asked thousands of volunteers to help teach a computer to spot supernovae, to build a new way of spotting useful trends in the City’s daily avalanche of analyst research.

    “The core use is being able to extract patterns from data without specifically telling the algorithms what patterns we are looking for. Previously, you would define the shape of the model and apply it to the data,” he said.

    These technologies are not just being put to work in the financial markets. Several law firms are using natural language processing to carry out some of the drudgery, including poring over repetitive contracts.

    Slaughter & May has recently adopted Luminance, a due diligence programme that is backed by Mike Lynch, former boss of the computing group Autonomy.

    Freshfields has spent a year teaching a customised system known as Kira to understand the nuances of contract terms that often occur in its business.

    Its lawyers have fed the computer documents they are reading, highlighting the parts they think are crucial. Kira can now parse a contract and find the relevant paragraphs between 40pc and 70pc faster than a human lawyer reviewing it by hand.

    “It kicks out strange things sometimes, irrelevancies that lawyers then need to clean up. We’re used to seeing perfect results, so we’ve had to teach people that you can’t just set the machine running and leave it alone,” said Isabel Parker, head of innovations at the firm.

    “I don’t think it will ever be a standalone product. It’s a tool to be used to enhance our productivity, rather than replace individuals.”

    The system is built to learn any Latin script, and Freshfields’ lawyers are now teaching it to work on other languages. “I think our lawyers are becoming more and more used to it as they understand its possibilities,” she added.

    Insurers are also spending heavily on big data fed by new products such as telematics, which track a customer’s driving style in minute detail, to help give a fair price to each customer. “The main driver of this is the customer experience,” said Darren Price, group chief information officer at RSA.

    The insurer is keeping its technology work largely in-house, unlike rival Aviva, which has made much of its partnerships with start-up companies in its “digital garage”. Allianz recently acquired the robo-adviser Moneyfarm, and Axa’s venture fund has invested in a chat-robot named Gasolead.

    EY, the professional services firm, is also investing in analytics tools that can raise red flags for its clients in particular countries or businesses, enabling managers to react before an accounting problem spreads.

    Even the Financial Conduct Authority is getting in on the act. Having given its blessing to the insurance sector’s use of big data, it is also experimenting with a “sandbox”, or a digital safe space where their tech experts and outside start-ups can use real-life data to play with new ideas.

    The advances that catch on throughout the financial world could create a more efficient industry – and with that tends to come job cuts. The Bank of England warned a year ago that as many as 15m UK jobs were at risk from smart machines, with sales staff and accountants especially vulnerable.

    “Financial services are playing catch-up compared to some of the retail-focused businesses. They are having to do so rapidly, partly due to client demand but also because there are new challengers and disruptors in the industry,” said Amanda Foster, head of financial services at the recruiter Russell Reynolds Associates.

    But City firms, for all their cost pressures, are not ready to replace their fund managers with robots, she said. “There’s still the art of making an investment decision, but it’s about using analytics and data to inform those decisions.”

    Source: Telegraph.co.uk, October 8, 2016



  • The three key challenges that could derail your artificial intelligence project

    It's been abundantly clear for a while that in 2017, artificial intelligence (AI) is going to be front and center of vendor marketing as well as enterprise interest. Not that AI is new – it's been around for decades as a computer science discipline. What's different now is that advances in technology have made it possible for companies ranging from search engine providers to camera and smartphone manufacturers to deliver AI-enabled products and services, many of which have become an integral part of many people's daily lives. More than that, those same AI techniques and building blocks are increasingly available for enterprises to leverage in their own products and services without needing to bring on board AI experts, a breed that's rare and expensive.

    Sentient systems capable of true cognition remain a dream for the future. But AI today can help organizations transform everything from operations to the customer experience. The winners will be those who not only understand the true potential of AI but are also keenly aware of what's needed to deploy a performant AI-based system that minimizes rather than creates risk and doesn't result in unflattering headlines.

    These are the three key challenges all AI projects must tackle:

    • Underestimating the time and effort it takes to get an AI-powered system up and running. Even if the components are available out of the box, systems still need to be trained and fine-tuned. Depending on the exact use case and requirements for accuracy, it can be anything between a few hours and a couple of years to have a new system up and running. That’s assuming you have a well-curated data set available; if you don’t, that’s another challenge.
    • AI systems are only as good as the people who program them and the data they feed them. It's also people who decide to what degree to rely on the AI system and when to apply human expertise. Ignoring this principle will have unintended, likely negative consequences and could even make the difference between life and death. These are not idle warnings: we've already seen a number of well-publicized cases where training bias ended up discriminating against entire population groups, or image recognition software turned out to be racist; and yes, lives have already been put at risk by badly trained AI programs. Lastly, there's the law of unintended consequences: people developing AI systems tend to focus on how they want the system to work, but not on how somebody with criminal or mischievous intent could subvert it.
    • Ignore legal, regulatory and ethical implications at your peril. For example, you're at risk of breaking the law if the models you run take into consideration factors that mustn't be used as the basis for certain decisions (e.g., race, sex). Or you could find yourself with a compliance breach if you’re under obligation to provide an exact audit trail of how a decision was arrived at, but where neither the software nor its developers can explain how the result came about. A lot of grey areas surround the use of predictions when making decisions about individuals; these require executive level discussions and decisions, as does the thorny issue of dual-use.

    Source: forrester.com, January 9, 2017

  • Top 10 big data predictions for 2019

    The amount of data created nowadays is incredible. The amount and importance of data is ever growing, and with that, the need to analyze and identify patterns and trends in data becomes critical for businesses. The need for big data analytics is therefore higher than ever. That raises questions about the future of big data: 'In which direction will the big data industry evolve?' 'What are the dominant trends for big data in the future?' While there are several predictions doing the rounds, these are the top 10 big data predictions that will most likely dominate the (near) future of the big data industry:

    1. An increased demand for data scientists

    It is clear that with the growth of data, the demand for people capable of managing big data is also growing. Demand for data scientists, analysts and data management experts is on the rise. The gap between the demand for and the availability of people skilled in analyzing big data trends is big and keeps getting bigger. It is up to you to decide whether to hire offshore data scientists/data managers or an in-house team for your business.

    2. Businesses will prefer algorithms over software

    Businesses will prefer purchasing algorithms over software. Buying algorithms gives them more customization options than buying software: software cannot be modified to user requirements; rather, businesses have to adjust to the software.

    3. Businesses increase investments in big data

    IDC analysts predict that the investment in big data and analytics will reach $187 billion in 2019. Even though the big data investment from one industry to the other will vary, spending as a whole will increase. It is predicted that the manufacturing industry will experience the highest investment in big data, followed by healthcare and the financial industry.

    4. Data security and privacy will be a growing concern

    Data security and privacy have been the biggest challenges in the big data and internet of things (IoT) industries. Since the volume of data started increasing exponentially, the privacy and security of data have become more complex and the need to maintain high security standards has become extremely important. If there is anything that will impede the growth of big data, it is data security and privacy concerns.

    5. Machine learning will be of more importance for big data

    Machine learning will be of paramount importance regarding big data. One of the most important reasons why machine learning will be important for big data is that it can be of huge help in predictive analysis and addressing future challenges.

    6. The rise of predictive analytics

    Simply put, predictive analytics uses big data analytics to predict the future more reliably. It is a highly sophisticated and effective way to gather market and customer information and determine the next actions of both consumers and businesses. Analytics provide depth in the understanding of future behaviour.

    7. Chief Data Officers will have a more important role

    As big data becomes important, the role of Chief Data Officers will increase. Chief Data Officers will be able to direct functional departments with the power of deeply analysed data and in-depth studies of trends.

    8. Artificial Intelligence will become more accessible

    Without going into detail about how Artificial Intelligence is becoming significantly important for every industry, it is safe to say that big data is a major enabler of AI. Processing large amounts of data to derive trends for AI and machine learning is now possible: with cloud-based data storage infrastructure, big data can be processed in parallel. Big data will make AI more productive and more efficient.

    9. A surge in IoT networks

    Smart devices are dominating our lives like never before. There will be an increase in the use of IoT by businesses and that will only increase the amount of data that is being generated. In fact, the focus will be on introducing new devices that are capable of collecting and processing data as quickly as possible.

    10. Chatbots will get smarter

    Needless to say, chatbots account for a large part of daily online interaction, and they are becoming more and more intelligent and capable of personalized interactions. With the rise of AI, big data will enable tons of conversations to be processed and analysed, yielding a more streamlined, customer-focused strategy that makes chatbots smarter.

    Is your business ready for the future of big data analytics? Keep the above predictions in mind when preparing your business for emerging technologies and think about how big data can play a role.

    Source: Datafloq

  • United Nations CITO: Artificial intelligence will be humanity's final innovation

    The United Nations Chief Information Technology Officer spoke with TechRepublic about the future of cybersecurity, social media, and how to fix the internet and build global technology for social good.

    Artificial intelligence, said United Nations chief information technology officer Atefeh Riazi, might be the last innovation humans create.

    "The next innovations," said the cabinet-level diplomat during a recent interview at her office at UN headquarters in New York, "will come through artificial intelligence."

    From then on, said Riazi, "it will be the AI innovating. We need to think about our role as technologists and we need to think about the ramifications—positive and negative—and we need to transform ourselves as innovators."

    Appointed by Secretary General Ban Ki-moon as CITO and Assistant Secretary-General of the Office of Information and Communications Technology in 2013, Riazi is also an innovator in her own right in the global security community.

    Riazi was born in Iran, and is a veteran of the information technology industry. She has a degree in electrical engineering from Stony Brook University in New York, spent over 20 years working in IT roles in the public and private sectors, and was the New York City Housing Authority's Chief Information Officer from 2009 to 2013. She has also served as the executive director of CIOs Without Borders, a non-profit organization dedicated to using technology for the good of society—especially to support healthcare projects in the developing world.

    Riazi and her UN staff meet with diplomats and world leaders, NGOs, and executives at private companies like Google and Facebook to craft technology policy that impacts governments and businesses around the world.

    TechRepublic's in-depth interview with her covered a broad range of important technology policy issues, including the digital divide, e-waste, cybersecurity, social media, and, of course, artificial intelligence.

    The Digital Divide

    TechRepublic: Access to information is essential in modern life. Can you explain how running IT for the New York City Housing Authority helps low income people?

    UN CITO: When I was at New York City Housing, I came in as a CIO. The chairman had been a CIO and within six months most of the leadership left. He looked at me. I looked at him. The board looked at me. I knew to be nervous, and they said, "you're in. You're the next acting general manager of New York City Housing." I said, "Okay."

    New York City Housing is a $3 billion organization providing support to about 500,000 residents. You have the Section 8 program, you have the public housing, and a billion and a half of construction. I came out of IT and I had to help manage and run New York City Housing at a very difficult time.

    When you look at the city of New York, the digital divide among the youth and among the poor is very high. We have a digital divide right in this great city. Today I have two eight year olds and their homework. A lot of [their] research is done online. But in other areas of the city, you have kids that don't have access to computers, don't have access to the internet, cannot afford it. They can't find jobs because they don't have access to the internet. They can't do as well in school. A lot of them are single family, maybe grandparents raising them.

    How do we provide them that access? How do we close the gap so they can compete with other classmates who have access to knowledge and information?

    In Finland, they passed a law stating that internet access is a birthright. If it's a birthright, then let's give it to people right here in New York and elsewhere in the world.

    All of the simple things that we have and we offer our children, if we could [provide internet access] as a public service, we begin to close the income gap, help people learn skills, and make them more viable for jobs.


    TechRepublic: Can you help us understand the role of electronic waste (e-waste) on women and girls in developing countries?

    UN CITO: E-waste is the mercury and lead. Mercury and lead contribute to 5% of global waste. They contribute to 70% of hazardous materials. You have computers, servers, storage, and cell phones. We have no plans on recycling these. This is polluting the air and the water in China and India. If you burn electronics you get dioxin, which is like Agent Orange. The question to the tech sector is, okay, you created this wonderful world of technology, but you have no plan for addressing these big issues of environmental hazard.

    The impact of electronic waste is tremendous because a woman's body treats mercury as calcium. It brings it in, it puts it in the bones and then when you're pregnant, guess what? It thinks, oh, "I got some calcium. Here it is."

    Newborns have mercury and lead in their blood, and disease. It's just contributing to so many children, so many women getting sick and because women pass it on to the next generation, [children] are impacted.

    Where is the responsibility of the tech sector to say, "I will protect the women. I will protect the children. I will take out the lead and mercury. I will help contribute to recycling of my materials."

    The Deep Web

    TechRepublic: While there are many privacy benefits to the Deep Web, it's no secret that criminal activity flourishes on underground sites. I know this is the perpetual question, but is this criminal behavior that has always existed and now we can see it a little better, or does the Deep Web perpetuate and increase criminal behavior?

    UN CITO: I wish I had enough insight to answer correctly, but I can give it from my perspective. The scope has changed tremendously. If you look at slavery and the number of people trafficked, there's 200 million people trafficked now. You look at the numbers and you look at how much the slaves were sold [in the past]. I think the slaves were sold for [hundreds] of... today's dollars. Today, you can buy a girl for $300 through the Deep Web.

    Here's the thing. As to child trafficking: human trafficking has exploded because we're a global world. We can sell and buy globally. Before, the criminals couldn't do it globally. They couldn't move the people as fast.

    TechRepublic: If we're putting this in very cynical market terms, the market for humans has grown due to the Deep Web?

    UN CITO: Yes. The market has grown for sex trafficking, or for organs, or for just basic labor. There are many reasons where this has happened. We're seeing tremendous growth in criminal activity. It's very difficult to find criminals. Drug trafficking is easier. Commerce is easier in the Deep Web. All of that is going up.

    Humans are 99% good, but you've got the 1%, and I think we have a plan to react to the criminal activities. At the UN we are beginning to build the cyber-expertise to become a catalyst. Not to resolve these issues, because I look at the internet as an infant that we have created, this species we've created which is growing and it's evolving. It's going through "terrible twos" right now. We have a choice to try to manage it, censor it, or shut it down, which we see in some countries. Or we have a choice to build its antibody. Make sure that it becomes strong.

    We [can] create the "Light Web," and I think we can only do it through the use of all the amazing technology people globally want to [use to] do good. As a social group, we can create positive algorithms for social good.

    Encryption and cybersecurity

    TechRepublic: In the digital world, the notion of sovereignty is shifting. What is the UN's role in terms of cybersecurity?

    UN CITO: It's shifting, exactly, because government rule over a civil society in a cyber-world doesn't exist. Do you think that criminals care that the UN or governments have a policy, or a rule? Countries and criminals will begin to attack each other.

    From our perspective, our mission is really peace and security, development of human rights. The UN has a number of responsibilities. We have peacekeeping, human rights, development, and sustainable development. We look at cybersecurity, and we say that peace in the cyber-world is very different because countries are starting to attack each other, and starting to attack each [other's] industrial systems. Often attacks are asymmetrical. Peace to me is very different than peace to you.

    We talk about cybersecurity. Okay, then what do we do? This is the world we've created through the internet. What do we do to bring peace to this world? What does anyone do?

    I think that we spend a lot of money on cybersecurity globally. Public and private money, and we are not successful, really. Intrusions happen every day. Intellectual property is lost. Privacy, the way we knew it, has changed completely. There's a new way of thinking about privacy, and what's confidential.

    We worry about industrial systems like our electric grid. We worry about our member states' industrial systems, intrusions into electricity, into water, and sanitation—things that impact human life.

    Our peacekeepers are out in the field. We have helicopters. We have planes. A big worry of ours is an intrusion into a plane or helicopter, where you think the fuel gauge is full but it's empty. Or through a GPS. If your GPS is impacted, and you think you're here but you're actually there.

    Where is the role of encryption? Encryption is amoral. It could be used for good. It could be used for bad. It's hard to have an opinion on encryption, for me at least, without realizing that the same thing I endorse for everyone, others endorse for criminals. Do we have the sophistication, the capabilities to limit that technology only for the good? I don't think we do.

    TechRepublic: What is the plan for cybersecurity?

    UN CITO: Well, I've been waiting. I think that is something for all the member states to come together and talk about cybersecurity.

    But what is the plan of us as Homo sapiens, now that we are connected sapiens and very soon a combination of carbon and silicon? As super intelligent beings, what is the plan? This is not being talked about. We hope that through the creation of digital Blue Helmets we'd begin a conversation and we'd begin to ask people to contribute positively to what we believe is ethically right. But then again, what we believe is ethically right somebody else may believe is ethically wrong.

    Social Media

    TechRepublic: The UN recently held a conference on social media and terrorism, particularly related to Daesh [ISIS]. What was the discussion about? What takeaways came from that conference?

    UN CITO: Well, we got together as a lot of information and communication professionals, and academics to talk about the big issue of social media and terrorism with Daesh and ISIL. I think this type of dialog is really critical because if we don't talk about these issues, we can't come up with policy recommendations. I think there's a lot of really good discussion about human rights on the internet. "Thou shalt do no harm."

    But we know that whatever policies we come up with, Daesh would be the last group that cares whether you have policies or not. There's deeper discussion about how does youth get attracted to radicalism? You have 50% unemployment of youth. You have major income disparity. I think if we can't begin to address the basic social issues, we're going to have more and more youth attracted to this radicalism. There was good discussion and dialog that we need to address those issues.

    There's some discussion about how do we create the positive message? People, especially youth, want to do something positive. They want to participate. They want to be part of a bigger thing. How do we encourage them? When they look at the negative message, how do you bring in a positive message? Can governments do something about that?

    Look at the private sector. When there was a Tylenol scare or Toyota speeding on its own, when you went online and you searched for Tylenol, you didn't get all the bad stories about Tylenol. You went into the sites that Tylenol wanted you to go. Search is so powerful, and if you can begin to write positive algorithms, that begins to move the youth to positive messaging.

    Don't try to use marketing or gimmicks because it's so transparent. People see right through it. Governments have a responsibility to provide a positive information space for their youth. There was a lot of good dialog around that.

    On the technology side, I think this is a two year old infant, the internet is amoral, and we can use it for good and use it for bad. You can't shut down the internet. You can't shut down social media. There's a very gray space because, as I said, somebody's freedom fighter is somebody else's terrorist. Is it for Facebook or Twitter to make that decision?

    Artificial intelligence

    TechRepublic: I know you are quite curious about artificial intelligence. Is there a UN policy with respect to AI?

    UN CITO: AI is an amazing thing to talk about, because now you can look at patterns much faster than humans [can]. Do we as technologists have the sophistication of addressing the moral and ethical issues of what's good and bad?

    I think this is what scares me when it comes to AI. Let's say we as humans say, "we want people to be happy and with artificial intelligence, we should build systems for people to be happy." What does that mean?

    I'm looking at the machine language, and the path we're creating for 10, 20, 30 years from now but not fully understanding the ethical programming that we're putting into the systems. IT people are creating the next world. The ethical programming they do is what is in their head, and so policies are being written in lines of code, in the algorithms.

    We look at artificial intelligence and machine learning, and the world we see as technologists 20 years from now is very different than the world we have today. Artificial intelligence is this super, super intelligent species that is not human. Humans have reached our limitation.

    That idea poses so many questions. If we create this artificial intelligence that can do 80% of the labor that humans do, what are the changes? Social, cultural, economic. All of these big, big questions have to be talked about.

    I'm hoping that's the United Nations, but there's so much political opposition to those conversations. So much political opposition because we are holding on to our physical borders, and we have forgotten that those physical borders are gone. The world is virtual. We sit here as heads of departments and ministers and talk about AI. We discuss the moral, the ethical issues that people are going to confront with AI technology—positive and negative.

    Source: TechRepublic

  • What about the relation between AI and machine learning?

    Artificial intelligence is one of the most compelling areas of computer science research. AI technologies have gone through periods of innovation and growth, but AI research and development has never seemed as promising as it does now. This is due in part to amazing developments within machine learning, deep learning, and neural networks.

    Machine learning, a cutting-edge branch of artificial intelligence, is propelling the AI field further than ever before. While AI assistants like Siri, Cortana, and Bixby are useful, if not amusing, applications of AI, they lack the ability to learn, self-correct, and self-improve. 

    They are unable to operate outside of their code, learn independently, or apply past experiences to new problems. Machine learning is changing that. Machines are able to grow outside their original code, which allows them to mimic the cognitive processes of the human mind.

    Why is machine learning important for AI? As you have most likely already gathered, machine learning is the branch of AI dedicated to endowing machines with the ability to learn. While there are programs that help sort your email, provide you with personalized recommendations based on your online shopping behavior, and make playlists based on music you like, these programs lack the ability to truly think for themselves. 

    While these “weak AI” programs are able to analyze data well and conjure up impressive responses, they are a far cry from true artificial intelligence. The only way to arrive at anything close to true artificial intelligence would require a machine to learn. A machine with true artificial intelligence, also known as artificial general intelligence, would be aware of its environment and would manipulate that environment to achieve its goals. A machine with artificial general intelligence would be no different from a human, who is aware of his or her surroundings and uses that awareness to arrive at solutions to problems occurring within those surroundings.

    You may be familiar with the infamous AlphaGo program that beat a professional Go player in 2016, to the chagrin of many professional Go players. While AI has been able to beat chess players in the past, the AI win came as an incredible shock to Go players and AI researchers alike. Surpassing Go players was previously thought to be impossible, given that each move in the ancient game has an almost infinite number of permutations. Decisions in Go are so intricate and complex that it was thought that the game required human intuition. As it so happens, Go does not require human intuition; it only requires general-purpose learning algorithms.
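    The "almost infinite permutations" claim can be made concrete with a back-of-the-envelope calculation. The branching factors (~35 moves per position in chess, ~250 in Go) and typical game lengths (~80 and ~150 moves) below are commonly cited approximations, not figures from this article:

    ```python
    # Rough game-tree sizes for chess and Go, using commonly cited
    # approximate branching factors and game lengths.

    def order_of_magnitude(n: int) -> int:
        """Power of ten of n, computed from its decimal digit count."""
        return len(str(n)) - 1

    chess_tree = 35 ** 80     # ~35 legal moves per position, ~80 moves per game
    go_tree = 250 ** 150      # ~250 legal moves per position, ~150 moves per game

    print(f"chess: ~10^{order_of_magnitude(chess_tree)}")
    print(f"go:    ~10^{order_of_magnitude(go_tree)}")
    ```

    The gap of over 200 orders of magnitude is why exhaustive search, which works tolerably for chess, was hopeless for Go and learning algorithms were needed instead.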

    How were these general-purpose learning algorithms crafted? The AlphaGo program was created by DeepMind Technologies, an AI company acquired by Google in 2014. Using researchers as well as C++, Lua, and Python developers, DeepMind managed to create a neural network, along with a model that allows machines to mimic short-term memory. The neural network and the short-term memory model are applications of deep learning, a cutting-edge branch of machine learning.

    Deep learning is an approach to machine learning in which software emulates the human brain. Currently, machine learning applications allow a machine to train on a certain task by analyzing examples of that task. Deep learning allows machines to learn in a more general way. So, instead of simply mimicking cognitive functioning in a predefined task, machines are endowed with what can be thought of as a sort of artificial brain. This artificial brain is called an artificial neural network, or neural net for short.

    There are several neural net models in use today, and all use mathematics to copy the structure of the human brain. Neural nets are divided into layers, and consist of thousands, sometimes millions, of interconnected processing nodes. Each connection between nodes is given a weight. If the weighted input exceeds a predefined threshold, the node's data is sent on to the next layer. These nodes act as artificial neurons, sharing clusters of data, storing experience and knowledge based on that data, and firing off new bits of information. These nodes interact dynamically and change thresholds and weights as they learn from experience.
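    The node behavior described above can be sketched in a few lines: each connection carries a weight, inputs are summed, and a node "fires" (passes data to the next layer) only if the weighted sum exceeds its threshold. The weights and thresholds here are illustrative values, not taken from any real trained network:

    ```python
    # A minimal sketch of a thresholded neural-net node (a perceptron-style
    # unit): weighted sum of inputs, compared against a threshold.

    def node_output(inputs, weights, threshold):
        """Return 1 if the weighted sum of inputs exceeds the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    # A tiny two-node "layer": both nodes see the same inputs but have
    # their own weights and thresholds.
    layer = [
        {"weights": [0.6, 0.9], "threshold": 1.0},
        {"weights": [0.2, 0.1], "threshold": 0.5},
    ]

    inputs = [1.0, 1.0]
    outputs = [node_output(inputs, n["weights"], n["threshold"]) for n in layer]
    print(outputs)  # first node fires (1.5 > 1.0), second does not (0.3 < 0.5)
    ```

    Learning, in this picture, is the process of nudging the weights and thresholds after each example so the layer's outputs move toward the desired ones.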

    Machine learning and deep learning are exciting and alarming areas of research within AI. Endowing machines with the ability to learn certain tasks could be extremely useful, could increase productivity, and could help expedite all sorts of activities, from search algorithms to data mining. Deep learning provides even more opportunities for AI's growth. As researchers delve deeper into deep learning, we could see machines that understand the mechanics behind learning itself, rather than simply mimicking intellectual tasks.

    Author: Greg Robinson

    Source: Information Management

  • Where Artificial Intelligence Is Now and What’s Just Around the Corner

    Unexpected convergent consequences...this is what happens when eight different exponential technologies all explode onto the scene at once.

    This post (the second of seven) is a look at artificial intelligence. Future posts will look at other tech areas.

    An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try to predict the future when A.I., robotics, VR, synthetic biology and computation are all doubling, morphing and recombining. You have a very exciting (read: unpredictable) future. This year at my Abundance 360 Summit I decided to explore this concept in sessions I called "Convergence Catalyzers."

    For each technology, I brought in an industry expert to identify their Top 5 Recent Breakthroughs (2012-2015) and their Top 5 Anticipated Breakthroughs (2016-2018). Then, we explored the patterns that emerged.

    Artificial Intelligence — Context

    At A360 this year, my expert on AI was Stephen Gold, the CMO and VP of Business Development and Partner Programs at IBM Watson. Here's some context before we dive in.

    Artificial intelligence is the ability of a computer to understand what you're asking and then infer the best possible answer from all the available evidence.

    You may think of AI as Siri or Google Now on your iPhone, Jarvis from Iron Man or IBM's Watson.

    Progress of late is furious — an AI R&D arms race is underway among the world's top technology giants.

    Soon AI will become the most important human collaboration tool ever created, amplifying our abilities and providing a simple user interface to all exponential technologies. Ultimately, it's helping us speed toward a world of abundance.

    The implications of true AI are staggering, and I asked Stephen to share his top five breakthroughs from recent years to illustrate some of them.

    Recent Top 5 Breakthroughs in AI: 2011 - 2015

    "It's amazing," said Gold. "For 50 years, we've ideated about this idea of artificial intelligence. But it's only been in the last few years that we've seen a fundamental transformation in this technology."

    Here are the breakthroughs Stephen identified in artificial intelligence research from 2011-2015:

    1. IBM Watson wins Jeopardy, demonstrating the integration of natural language processing, machine learning (ML), and big data.

    In 2011, IBM's AI system, dubbed "Watson," won a game of Jeopardy against the top two all-time champions.

    This was a historic moment, the "Kitty Hawk moment" for artificial intelligence.

    "It was really the first substantial, commercial demonstration of the power of this technology," explained Gold. "We wanted to prove a point that you could bring together some very unique technologies: natural language technologies, artificial intelligence, the context, the machine learning and deep learning, analytics and data and do something purposeful that ideally could be commercialized."

    2. Siri/Google Now redefine human-data interaction.

    In the past few years, systems like Siri and Google Now opened our minds to the idea that we don't have to be tethered to a laptop to have seamless interaction with information.

    In this model, AIs will move from speech recognition to natural language interaction, to natural language generation, and eventually to an ability to write as well as receive information.

    3. Deep learning demonstrates how machines learn on their own, advance and adapt.

    "Machine learning is about man assisting computers. Deep learning is about systems beginning to progress and learn on their own," says Gold. "Historically, systems have always been trained. They've been programmed. And, over time, the programming languages changed. We certainly moved beyond FORTRAN and BASIC, but we've always been limited to this idea of conventional rules and logic and structured data."

    As we move into the area of AI and cognitive computing, we're exploring the ability of computers to do more unaided/unassisted learning.

    4. Image recognition and interpretation now rivals what humans can do — allowing for image interpretation and anomaly detection.

    Image recognition has exploded over the last few years. Facebook and Google Photos, for example, each have tens of billions of images on their platforms. With these datasets, they (and many others) are developing technologies that go beyond facial recognition, providing algorithms that can tell you what is in an image: a boat, plane, car, cat, dog, and so on.

    The crazy part is that the algorithms are better than humans at recognizing images. The implications are enormous. "Imagine," says Gold, "an AI able to examine an X-ray or CAT scan or MRI to report what looks abnormal."

    5. AI Apps proliferate: universities scramble to adopt AI curriculum

    As AI begins to impact every industry and every profession, schools and universities are responding by ramping up their AI and machine learning curricula. IBM, for example, is working with over 150 partners to present both business and technology-oriented students with cognitive computing curricula.

    So what's in store for the near future?

    Anticipated Top AI Breakthroughs: 2016 – 2018

    Here are Gold's predictions for the most exciting, disruptive developments coming in AI in the next three years. As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.

    1. Next-gen A.I. systems will beat the Turing Test

    Alan Turing created the Turing Test over half a century ago as a way to determine a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.

    Loosely, if an artificial system passed the Turing Test, it could be considered "AI."

    Gold believes, "that for all practical purposes, these systems will pass the Turing Test" in the next three-year period.

    Perhaps more importantly, if they do, this event will accelerate the conversation about the proper use of these technologies and their applications.

    2. All five human senses (yes, including taste, smell and touch) will become part of the normal computing experience.

    AIs will begin to sense and use all five senses. "The sense of touch, smell, and hearing will become prominent in the use of AI," explained Gold. "It will begin to process all that additional incremental information."

    When applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses.

    3. Solving big problems: detect and deter terrorism, manage global climate change.

    AI will help solve some of society's most daunting challenges.

    Gold continues, "We've discussed AI's impact on healthcare. We're already seeing this technology being deployed in governments to assist in the understanding and preemptive discovery of terrorist activity."

    We'll see revolutions in how we manage climate change, redesign and democratize education, make scientific discoveries, leverage energy resources, and develop solutions to difficult problems.

    4. Leverage ALL health data (genomic, phenotypic, social) to redefine the practice of medicine.

    "I think AI's effect on healthcare will be far more pervasive and far quicker than anyone anticipates," says Gold. "Even today, AI/machine learning is being used in oncology to identify optimal treatment patterns."

    But it goes far beyond this. AI is being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences.

    5. AI will be woven into the very fabric of our lives — physically and virtually.

    Ultimately, during the AI revolution taking place in the next three years, AIs will be integrated into everything around us, combining sensors and networks and making all systems "smart."

    AIs will push forward the ideas of transparency, of seamless interaction with devices and information, making everything personalized and easy to use. We'll be able to harness that sensor data and put it into an actionable form, at the moment when we need to make a decision.

    Source: SingularityHub

  • Who will dominate: man or machine?

    Developments in information technology are moving fast, and perhaps ever faster. We hear and see more and more about business intelligence, self-service BI, artificial intelligence and machine learning. We see this in employees who increasingly have management information at their fingertips through tools, in self-driving cars, in care robots for dementia patients, and in computers that beat humans at games.

    What does this mean?

    • Companies' revenue models will change
    • Innovations may no longer come primarily from humans
    • Much of what is today human labor will be taken over by machines.

    This article highlights a few of these developments to show how important business intelligence is today.

    Revenue models based on data

    We read daily that information technology is turning existing revenue models upside down; we need only look at V&D. The number of companies whose business model depends crucially on external data collection and analysis is growing hand over fist, even in sectors until now strongly dominated by government, such as education and healthcare. Well-known companies such as Google and Facebook actually started without a concrete revenue model, but could no longer function without this data (and its analysis).


    Take, for example, a company like Amazon, which runs entirely on data. The data it collects largely concerns who we are, how we behave, and what our preferences are. Amazon gives this data ever more meaning by applying the newest technologies. One example is how Amazon even develops films and books based on our purchasing, viewing and reading behavior, and it will certainly not stop there. According to Gartner, Amazon is one of the most leading and visionary players in the market for Infrastructure as a Service (IaaS). Gartner also praises Amazon for how quickly it anticipates the market's technological needs.


    According to the United Nations, the newest innovations will come from artificial intelligence. This assumes that machines will surpass humans when it comes to devising innovations. IBM's Watson computer, for example, has already beaten humans on the quiz show Jeopardy. We can no longer do difficult mathematical calculations without computers, but that does not mean computers surpass humans in everything. The development of self-driving cars recently showed that, with machine learning, humans can still take the lead, and on balance far less development time was needed.

    Man or machine?

    The fact is that machines will take over more and more human tasks and will sometimes even surpass humans in thinking power. In the coming period, man and machine will increasingly live side by side, and computers will understand and master human behavior ever better. As a result, existing business models will change and many jobs in existing sectors will be lost. But whether computers will truly surpass humans, and whether future innovation will come exclusively from artificial intelligence, remains an open question. Granted, the industrial revolution had an enormous impact on humanity, and looking back it brought many advantages, even if it was not always easy for many people at the time. Let us see how we can turn this to our advantage. Interested? Click here for more information.

    Ruud Koopmans, RK-Intelligentie.nl, 29 February 2016

