60 items tagged "artificial intelligence"

  • 'We must prepare for mass unemployment caused by robots'

    The rise of robots and artificial intelligence is causing more and more jobs to disappear. We must start preparing for mass unemployment now, professors warn.

    We are entering an era in which machines can take over almost all human tasks, Moshe Vardi, professor of computer science at the University of Texas, said this weekend at a conference in the United States, the FT reports. No sector will remain untouched by robots, so the big question, according to the scientists, is: if robots take over our work, what will we do?

    Vardi adds that we could all spend our time on enjoyable things, but that a life revolving solely around leisure is not everything either. "I believe that work is essential to human well-being."

    'Governments not yet prepared'
    Companies such as Google, Facebook, IBM and Microsoft are scaling up their investments in artificial intelligence to billions this year, but governments do not yet seem prepared for this, experts at the conference stated.

    At the initiative of Bart Selman, professor of computer science at Cornell University, an open letter to policymakers was drawn up last year, urging them to map out the risks of ever-smarter machines. The letter has been signed by 10,000 entrepreneurs, professors and engineers, including Tesla founder Elon Musk.

    Through the non-profit organization OpenAI, Musk is funding research into artificial intelligence and into how people can benefit from it most. He regards artificial intelligence as one of the greatest threats to humanity.

    Source: RTL Z

  • 2016 will be the year of artificial intelligence

    December is traditionally the time of year to look back, and New Year's Eve is of course the best day of all to do so. At Numrush, however, we prefer to look ahead. We already did so in early December with our RUSH Magazine: in that Gift Guide we offered present ideas based on a number of themes we will be hearing a lot about in the coming year. One subject was deliberately left somewhat underexposed in our Gift Guide, partly because it is not something you give as a present, but also because it actually transcends the various themes. I am talking about artificial intelligence. That is nothing new, of course; a great deal has already happened in this field, but in the coming year its application will gain even more momentum.

  • 2017 Investment Management Outlook

    2017 investment management outlook infographic

    Several major trends will likely impact the investment management industry in the coming year. These include shifts in buyer behavior as the Millennial generation becomes a greater force in the investing marketplace; increased regulation from the Securities and Exchange Commission (SEC); and the transformative effect that blockchain, robotic process automation, and other emerging technologies will have on the industry.

    Economic outlook: Is a major stimulus package in the offing?

    President-elect Donald Trump may have to depend heavily on private-sector funding to proceed with his $1 trillion infrastructure spending program, considering Congress's ongoing reluctance to increase spending. The US economy may be nearing full employment, with younger cohorts entering the labor market as more Baby Boomers retire. In addition, the prospects for a fiscal stimulus seem greater now than they were before the 2016 presidential election.

    Steady improvement and stability is the most likely scenario for 2017. Although weak foreign demand may continue to weigh on growth, domestic demand should be strong enough to provide employment for workers returning to the labor force, as the unemployment rate is expected to remain at approximately 5 percent. GDP annual growth is likely to hit a maximum of 2.5 percent. In the medium term, low productivity growth will likely put a ceiling on the economy, and by 2019, US GDP growth may be below 2 percent, despite the fact that the labor market might be at full employment. Inflation is expected to remain subdued. Interest rates are likely to rise in 2017, but should remain at historically low levels throughout the year. If the forecast holds, asset allocation shifts among cash, commodities, and fixed income may begin by the end of 2017.

    Investment industry outlook: Building upon last year’s performance
    Mutual funds and exchange-traded funds (ETFs) have experienced positive growth. Worldwide regulated funds grew at a 9.1 percent CAGR, versus 8.6 percent for US mutual funds and ETFs. Non-US investments grew at a slightly faster pace due to global demand. Both worldwide and US funds, however, showed signs of declining demand in 2016 as returns remained low.

    Hedge fund assets have experienced steady growth over the past five years, even through performance swings.

    Private equity investments continued a track record of strong asset appreciation. Private equity has continued to attract investment even with current high valuations. Fundraising increased incrementally over the past five years as investors increased allocations in the sector.

    Shifts in investor buying behavior: Here come the Millennials
    Both institutional and retail customers are expected to continue to drive change in the investment management industry. The two customer segments are voicing concerns about fee sensitivity and transparency. Firms that enhance the customer experience and position advice, insight, and expertise as components of value should have a strong chance to set themselves apart from their competitors.

    Leading firms may get out in front of these issues in 2017 by developing efficient data structures to facilitate accounting and reporting and by making client engagement a key priority. On the retail front, the SEC is acting on retail investors' behalf with reporting modernization rule changes for mutual funds. A focus on engagement, transparency, and relationships over product sales is integral to creating a strong brand as a fiduciary, and it may prove to differentiate some firms in 2017.

    Growth in index funds and other passive investments should continue as customers react to market volatility. Investors favor the passive approach in all environments, as shown by net flows. They are using passive investments alongside active investments, rather than replacing the latter with the former. Managers will likely continue to add index share classes and index-tracking ETFs in 2017, even if profitability is challenged. In addition, the Department of Labor’s new fiduciary rule is expected to promote passive investments as firms alter their product offerings for retirement accounts.

    Members of the Millennial generation—which comprises individuals born between 1980 and 2000—often approach investing differently due to their open use of social media and interactions with people and institutions. This market segment faces different challenges than earlier generations, which influences their use of financial services.

    Millennials may be less prosperous than their parents and may have less wealth available to fully fund retirement. Many start their careers burdened by student debt. They may have negative memories of recent stock market volatility, distrust financial institutions, favor socially conscious investments, and rely on recommendations from their friends when seeking financial advice.

    Investment managers likely need to consider several steps when targeting Millennials. These include revisiting product lines, offering socially conscious “impact investments,” assigning Millennial advisers to client service teams, and employing digital and mobile channels to reach and serve this market segment.

    Regulatory developments: Seeking greater transparency, incentive alignment, and risk control
    Even with a change in leadership in the White House and at the SEC, outgoing Chair Mary Jo White’s major initiatives are expected to endure in 2017 as they seek to enhance transparency, incentive alignment, and risk control, all of which build confidence in the markets. These changes include the following:

    Reporting modernization. Passed in October 2016, this new set of forms, rules, and amendments for information disclosure and standardization will require significant development work by registered investment companies (RICs). Advisers will need technology solutions that can capture data from multiple sources, including data that may not currently exist; perform high-frequency calculations; and file the requisite forms with the SEC.

    Liquidity risk management (LRM). Passed in October 2016, this rule requires open-end funds (other than money market funds) and ETFs to establish LRM programs, reducing the risk that a fund cannot meet redemption requests without diluting the interests of remaining shareholders.

    Swing pricing. Also passed in October 2016, this regulation provides an option for open-end funds (except money market and ETFs) to adjust net asset values to pass the costs stemming from purchase and redemption activity to shareholders.
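    As a rough illustration of the mechanics, a fund might swing its NAV only when net flows cross a threshold; the threshold and swing factor below are hypothetical choices for the sketch, not values from the rule:

```python
def swung_nav(nav, net_flow, total_assets, threshold=0.01, swing_factor=0.002):
    """Adjust NAV when net flows exceed a threshold fraction of assets.

    Hypothetical parameters: swing when |net flow| exceeds 1% of assets,
    moving NAV by 0.2% in the direction of the flow, so trading costs are
    borne by transacting shareholders rather than remaining ones.
    """
    if abs(net_flow) <= threshold * total_assets:
        return nav                          # flows too small: no adjustment
    direction = 1 if net_flow > 0 else -1
    return nav * (1 + direction * swing_factor)

# Heavy redemptions (5% of assets) swing a $100 NAV down to about 99.8;
# modest flows leave it unchanged.
down = swung_nav(100.0, -5_000_000, 100_000_000)
flat = swung_nav(100.0, 500_000, 100_000_000)
```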

    Use of derivatives. Proposed in December 2015, this requires RICs and business development companies to limit the use of derivatives and put risk management measures in place.

    Business continuity and transition plans. Proposed in June 2016, this measure requires registered investment advisers to implement written business continuity and transition plans to address operational risk arising from disruptions.

    The Dodd-Frank Act, Section 956. Reproposed in May 2016, this rule prohibits compensation structures that encourage individuals to take inappropriate risks that may result in either excessive compensation or material loss.

    The DOL’s Conflict-of-Interest Rule. In 2017, firms must comply with this major expansion of the “investment advice fiduciary” definition under the Employee Retirement Income Security Act of 1974. There are two phases to compliance:

    Phase one requires compliance with investment advice standards by April 10, 2017. Distribution firms and advisers must adhere to the impartial conduct standards and provide retirement investors with a notice that acknowledges the firm's fiduciary status and describes its material conflicts of interest. Firms must also designate a person responsible for addressing material conflicts of interest and for monitoring advisers' adherence to the impartial conduct standards.

    Phase two requires compliance with exemption requirements by January 1, 2018. Distribution firms must be in full compliance with exemptions, including contracts, disclosures, policies and procedures, and documentation showing compliance.

    Investment managers may need to create new, customized share classes driven by distributor requirements; drop distribution of certain share classes after the rule takes effect; and offer more fee reductions for mutual funds.

    Financial advisers may need to take another look at fee-based models, if they are not already using them; evolve their viewpoint on share classes; consider moving to zero-revenue share lineups; and contemplate greater use of ETFs, including active ETFs with a low-cost structure and the 22(b) exemption (which enables broker-dealers to set commission levels on their own).

    Retirement plan advisers may need to look for low-cost share classes (R1-R6) to be included in plan options and potentially new low-cost structures.

    Key technologies: Transforming the enterprise

    Investment management is poised to become even more driven by advances in technology in 2017, as digital innovations play a greater role than ever before.

    Blockchain. A secure and effective technology for tracking transactions, blockchain should move closer to commercial implementation in 2017. Already, many blockchain-based use cases and prototypes can be found across the investment management landscape. With testing and regulatory approvals, it might take one to two years before commercial rollout becomes more widespread.

    Big data, artificial intelligence, and machine learning. Leading asset management firms are combining big data analytics with artificial intelligence (AI) and machine learning to achieve two objectives: (1) provide insights and analysis for investment selection to generate alpha, and (2) improve cost effectiveness by leveraging expensive human analyst resources with scalable technology. Expect this trend to gain momentum in 2017.

    Robo-advisers. Fiduciary standards and regulations should drive the adoption of robo-advisers: online investment management services that provide automated portfolio management advice. Improvements in computing power are making robo-advisers more viable for both retail and institutional investors. In addition, some cutting-edge robo-adviser firms could emerge with AI-supported investment decision and asset allocation algorithms in 2017.

    Robotic process automation. Look for more investment management firms to employ sophisticated robotic process automation (RPA) tools to streamline both front- and back-office functions in 2017. RPA can automate critical tasks that require manual intervention, are performed frequently, and consume a significant amount of time, such as client onboarding and regulatory compliance.

    Change, development, and opportunity
    The outlook for the investment management industry in 2017 is one of change, development, and opportunity. Investment management firms that execute plans that help them anticipate demographic shifts, improve efficiency and decision making with technology, and keep pace with regulatory changes will likely find themselves ahead of the competition.


    Source: Deloitte.com


  • 5 Predictions for Artificial Intelligence in 2016

    Get ready to work alongside smart machines

     At Narrative Science, we love making predictions about innovation, technology and, in particular, the rise of artificial intelligence. We may be a bit too optimistic about the timing of certain technologies going mainstream, but we can’t help it. We are wildly optimistic about the future and genuinely believe that we have entered a dramatically new era of artificial intelligence innovation. That said, this year, we tried to focus our predictions on the near-term. Here’s our best guess as to what will happen in 2016.

    1. New inventions using AI will explode.

    In 2015, artificial intelligence went mainstream. Major tech companies including Google, Facebook, Amazon and Twitter made huge investments in AI, almost all of technology research company Gartner’s strategic predictions included AI, and headlines declared that AI-driven technologies were the next big disruptor to enterprise software. In addition, companies that made huge strides in AI, including Facebook, Microsoft and Google, open-sourced their tools. This makes it likely that in 2016, new inventions will increasingly come to market from companies discovering new ways to apply AI versus building it. With entrepreneurs now having access to low-cost quality AI technologies to create new products, we’ll also likely see an explosion in new startups using AI.

    2. Employees will work alongside smart machines.

    Smart machines will augment work and help employees be more productive, not replace them. Analytics industry leader Tom Davenport stated it well when he predicted that "smart leaders will realize that augmentation—combining smart humans with smart machines—is a better strategy than automation."

    3. Executives will demand transparency.

    Business leaders will realize that smart machines throwing out answers without explanation are of little use. If you walked into a CEO’s office and said we need to shut down three factories, the first question from the CEO would be: “Why?” Just producing a result isn’t enough, and communication capabilities will increasingly be built into advanced analytics and intelligent systems so that these systems can explain how they are arriving at their answers.

    4. Artificial Intelligence will reshape companies outside of IT.

    AI-powered business applications will start to infiltrate companies other than technology firms. Employees, teams and entire departments will champion process re-engineering efforts with these intelligent systems whether they realize it or not. As each individual app eliminates a task, employees will automate many of the mundane parts of their jobs and assemble their own stack of AI-powered apps. Teammates eager to be productive and stay competitive will follow, along with team managers who are looking to execute on cost-cutting efforts.

    5. Innovation labs will become a competitive asset.

    With the pace of innovation accelerating, large organizations in industries such as retail, insurance and government will focus even more energies on remaining competitive and discovering the next big thing by forming innovation labs. Innovation labs have existed for some time, but in 2016, we’ll begin to see more resources devoted to innovation labs and more technologies discovered in the labs actually implemented across different company functions and business lines.

    2016 will be a big year for AI. Much of the work in AI in 2016 will be the catalyst for rapid acceleration of the development and adoption of AI-powered applications. In addition and perhaps even more significant, 2016 will bring about a major shift in the perception of AI. It will cease to be a scary, abstract set of ideas and concepts and will be better understood and accepted as more people realize the potential of AI to augment what we do and make our lives more productive.

    Source: Time

  • 6 Changes in the jobs of marketers and market analysts caused by AI

    6 Changes in the jobs of marketers and market analysts caused by AI

    Artificial intelligence is having a profound impact on the state of marketing in 2019. And AI technology will be even more influential in the years to come.

    If you’re a marketer or a business owner in today’s competitive marketplace, you’ve probably tried just about everything you can think of to maximize your success. You’ve dabbled in digital marketing, visited trade shows, paid for print advertising, and incentivized customer testimonials. It’s probably resulted in lots of stress, sleepless nights, and even CBD oil drops to give you the energy and focus to keep going.

    Marketing requires multiple approaches to succeed, so while you should stick with the things you’ve been doing successfully, you’ll also want to include artificial intelligence (AI) in your current strategy. If you haven’t already, you might be falling behind your competitors. As many as 80% of marketers believe that AI will be the most effective tool by 2020.

    AI marketing techniques increase the efficiency of your marketing and are often more effective than some of the traditional tactics you may be using. You’ll combine big data and inbound marketing to deliver a practical marketing strategy that drives conversions. Here are some ways you can apply this seemingly magical tool:

    1. Customer personas

    The most basic rule about marketing is that you can’t hope to run successful campaigns if you don’t know who you’re targeting. A good marketer will create customer personas that tell you who your target market is and how you can best service them. Personas are made at the basic level by listing demographics, interests, and other information that can help you target an audience.

    About 53% of marketers say that AI is extremely useful in identifying customers. It provides information that you might not have otherwise considered when drafting a marketing strategy. This is extremely valuable since more specific information leads to more effective marketing.

    In order to capture this essential data, look through your company analytics. Define the demographics of those who follow you on social media, make purchases on your website, and comment or inquire about your products/services. This essential data can develop a more profound persona designed to target the right customer base.
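    As a sketch of that aggregation step, the snippet below derives a rough persona by tallying the traits of customers who actually bought; the field names and records are purely illustrative, not from any real analytics platform:

```python
from collections import Counter

# Hypothetical customer records pulled from company analytics.
customers = [
    {"age_band": "25-34", "channel": "social", "purchased": True},
    {"age_band": "25-34", "channel": "social", "purchased": True},
    {"age_band": "35-44", "channel": "email",  "purchased": False},
    {"age_band": "25-34", "channel": "search", "purchased": True},
]

# Keep only buyers, then take the most common value of each trait.
buyers = [c for c in customers if c["purchased"]]
persona = {
    "age_band": Counter(c["age_band"] for c in buyers).most_common(1)[0][0],
    "channel":  Counter(c["channel"] for c in buyers).most_common(1)[0][0],
}
print(persona)  # → {'age_band': '25-34', 'channel': 'social'}
```

    Real AI tools do this across far more dimensions and far more data, but the principle is the same: the persona is derived from observed buyer behavior rather than guesswork.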

    2. Digital advertising campaigns

    Many marketers have heard about the essentials of a digital advertising campaign in furthering sales, but they haven’t seen the results they hoped for. Artificial intelligence can significantly improve these campaigns. Once you’ve created a comprehensive view of your customer base, you’ll experience far more effective digital advertising campaigns.

    A great example of this is Facebook advertising, which is named by many marketing experts as the best bang for your buck. It allows you to create advertisements that are specifically targeted towards those who are most likely to make a purchase. However, it only works if you know exactly who your target audience is.

    Thanks to the abundance of consumer data collected by websites, social sites, and keyword searches, you’ll have all the information you need for more effective digital ads.

    3. Automated e-mail and SMS campaigns

    E-mail and SMS marketing are considered some of the best lead-generating marketing tactics out there. E-mail is the number-one channel for business communication, with 86% of consumers and business professionals reporting it as their preferred one. More importantly for sales, nearly 60% say it's their most effective channel for revenue generation.

    SMS marketing, although not as popular as e-mail among marketers, boasts similar data for millennial clients, or those aged 18-36. Thanks to AI, we know that about 83% of millennials open an SMS message within 90 seconds of receiving it. Three quarters say they prefer SMS for promotions, surveys, reminders, and similar communications from brands.

    With the help of AI, we not only understand the essentials of e-mail and SMS marketing, but also have the insights to improve them. AI-enabled tools facilitate targeted campaigns to a specific audience. They handle the busy work behind these campaigns so that you can focus more on developing products and customer service.

    4. Market research

    Savvy marketers begin every new campaign with market research, gathering information about customers, effective marketing strategies, and trends in the industry. This information is invaluable for directing campaigns effectively and making products more appealing to the intended audience.

    Big data provides all that information for you, although it’s difficult to understand it all on the surface. There’s so much information that you’ll need analytics tools to decipher the most useful data that can be used to direct your marketing efforts.

    Once you’ve broadened your horizons with data-deciphering tools, you’ll have an easier time interpreting customer emotions and their perceptions of your brand. You’ll be able to make changes or continue implementing an effective strategy with this insightful information.

    5. User experience

    As business owners know, it’s all about the user experience. A good marketing campaign begins with a website and advertisements designed specifically for customers’ benefits. In fact, customers are beginning to demand information, products, and services at lightning speed. AI can help you give that to them.

    One example is the use of chatbots for customer service. When customers reach out to you on Facebook Messenger, for example, you can set up a chatbot to respond immediately and let them know you’ll be with them shortly.

    Another example is the personalization that AI makes possible. As you get to know your audience better, you can tailor your advertisements and website experiences to the individual. Each time customers log onto your website, they'll be greeted by name, and advertisements across the web will show them only the things they're interested in seeing. E-mail marketing will improve with personalization as well.

    Social media and Google advertisements are all about catering more directly to the user experience. The data you collect about individual consumers all but guarantees your ads will be shown to the right people.

    6. Sales forecasting

    Fruitful marketing drives sales, a metric that’s easier to forecast and understand with the use of AI. Marketers can use all the information derived from inbound communication and compare it to traditional metrics in order to determine updates and improvements for sales strategies.

    AI can forecast the results of a given strategy before you commit to it, so you can determine whether it's worth the expense. This can save marketers significant time and money while driving more sales and growth as a result.
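    The simplest version of such a forecast is a least-squares trend line over past sales. Real AI tools use far richer models, but the sketch below, with hypothetical monthly figures, shows the basic idea:

```python
def forecast_next(sales):
    """Fit y = a + b*t by ordinary least squares and project one period ahead."""
    n = len(sales)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(sales) / n
    # Slope: covariance of (t, y) divided by variance of t.
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, sales)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a + b * n

# Steadily growing monthly sales project to 160.0 for the next month.
print(forecast_next([100, 110, 120, 130, 140, 150]))
```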

    AI is redefining the state of marketing

    Artificial intelligence is having a profound impact on the state of marketing in 2019. And AI technology will be even more influential in the years to come. Make sure that you understand its impact and find ways to utilize it to its full potential.

    Author: Diana Hope

    Source: SmartDataCollective

  • A brief look into Reinforcement Learning

    A brief look into Reinforcement Learning

    Reinforcement Learning (RL) is a very interesting topic within Artificial Intelligence, and the concept is quite fascinating. In this post I will try to give a nice initial picture for those who want to know more about RL.

    What is Reinforcement Learning?

    Conceptually, RL is a framework that describes systems (here called agents) that are able to learn how to interact with the surrounding environment purely by means of gathered experience. After each action (or interaction), the agent earns some reward: feedback from the environment that quantifies the quality of that action.

    Humans learn by the same principle. Think about a baby walking around. For this baby, everything is new. How can a baby know that grabbing something hot is dangerous? Only by touching the hot object and getting a painful burn. With this bad reward (or punishment), the baby learns that it is best to avoid touching anything too hot.

    It is important to point out that the terms agent and environment must be interpreted in a broad sense. It is easiest to visualize the agent as something like a robot and the environment as the place it is situated in. This is a fair analogy, but things can be much more complex. I like to think of the agent as a controller in a closed-loop system: it is basically an algorithm responsible for making decisions. The environment can be anything that the agent interacts with.

    A simple example to help you understand

    For a better understanding I will use a simple example here. Imagine a wheeled robot inside of a maze, trying to learn how to reach a goal marker. However, some obstacles are in its way. The aim is that the agent learns how to reach the goal without crashing into the obstacles. So, let's highlight the main components that compose this RL problem:

    • Agent: The decision making system. The robot, in our example.
    • Environment: A system which the agent interacts with. The maze, in this case.
    • State: For the agent to choose how to behave, it must estimate the state of the environment. For each state, there should exist an optimal action for the agent to choose. The state can be the robot's position, or an obstacle detected by the sensors.
    • Action: This is how the agent interacts with the environment. Usually there is a finite number of actions that the agent is able to perform. In our example it is the direction that the robot should move to.
    • Reward: The feedback that allows the agent to know whether an action was good or not. A bad reward (a low or negative value) can also be interpreted as a punishment. The main goal of RL algorithms is to maximize the long-term reward. If the robot reaches the goal marker, a big reward should be given; if it crashes into an obstacle, a punishment should be given instead.
    • Episode: Most RL problems are episodic, meaning that some event must terminate the episode's execution. In our example, the episode should finish when the robot reaches the goal or when a time limit is exceeded (to avoid the robot standing still forever).

    Usually, the agent is assumed to have no prior knowledge about the environment. Therefore, in the beginning, actions are chosen randomly. For each wrong decision (for example, crashing into an obstacle) the agent is punished; good decisions, on the other hand, are rewarded. Learning happens as the agent figures out how to avoid situations where punishment may occur and to choose actions that lead it to the goal.

    The reward accumulated in each episode is expected to increase over time and can be used to track the agent's learning progress. After many episodes, the robot should know how to behave in order to find the goal marker while avoiding any occasional obstacle, with no previous information about the environment. Of course there are many other things to be considered, but let's keep it simple for now.
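    The maze example can be sketched with tabular Q-learning, one of the classic RL algorithms. The grid layout, reward values, and learning parameters below are hypothetical choices made for this sketch, not part of the original example:

```python
import random

# A tiny 2x3 maze: start at (0, 0), goal at (0, 2), obstacle at (0, 1).
ROWS, COLS = 2, 3
START, GOAL, OBSTACLE = (0, 0), (0, 2), (0, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2            # learning rate, discount, exploration

def step(state, a):
    """Apply one action; return (next_state, reward, episode_done)."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) == OBSTACLE:
        return state, -5.0, False                # wall or obstacle: punishment
    if (r, c) == GOAL:
        return (r, c), 10.0, True                # goal reached: big reward
    return (r, c), -0.1, False                   # small cost per move

q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}
random.seed(0)
for _ in range(500):                             # episodes
    s, done, t = START, False, 0
    while not done and t < 30:                   # a time limit also ends the episode
        a = (random.randrange(4) if random.random() < EPSILON
             else max(range(4), key=lambda i: q[s][i]))
        s2, reward, done = step(s, a)
        # Q-learning update: nudge toward reward + discounted best next value.
        q[s][a] += ALPHA * (reward + GAMMA * max(q[s2]) - q[s][a])
        s, t = s2, t + 1
```

    After training, the greedy policy (always taking the highest-valued action) should detour below the obstacle to reach the goal, learned purely from rewards and punishments.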

    Author: Felp Roza

    Source: Towards Data Science

  • A Shortcut Guide to Machine Learning and AI in The Enterprise


    Predictive analytics / machine learning / artificial intelligence is a hot topic – what’s it about?

    Using algorithms to help make better decisions has been the "next big thing in analytics" for over 25 years. It has been used in key areas such as fraud detection the entire time. But it's now become a full-throated mainstream business meme that features in every enterprise software keynote — although the industry is battling with what to call it.

    It appears that terms like Data Mining, Predictive Analytics, and Advanced Analytics are considered too geeky or old for industry marketers and headline writers. The term Cognitive Computing seemed to be poised to win, but IBM’s strong association with the term may have backfired — journalists and analysts want to use language that is independent of any particular company. Currently, the growing consensus seems to be to use Machine Learning when talking about the technology and Artificial Intelligence when talking about the business uses.

    Whatever we call it, it’s generally proposed in two different forms: either as an extension to existing platforms for data analysts; or as new embedded functionality in diverse business applications such as sales lead scoring, marketing optimization, sorting HR resumes, or financial invoice matching.

    Why is it taking off now, and what’s changing?

    Artificial intelligence is now taking off because there's a lot more data available, and affordable, powerful systems to crunch through it all. It's also much easier to get access to powerful algorithm-based software in the form of open-source products or embedded as a service in enterprise platforms.

    Organizations today have also become more comfortable with manipulating business data, with a new generation of business analysts aspiring to become "citizen data scientists." Enterprises can take their traditional analytics to the next level using these new tools.

    However, we’re now at the “Peak of Inflated Expectations” for these technologies according to Gartner’s Hype Cycle — we will soon see articles pushing back on the more exaggerated claims. Over the next few years, we will find out the limitations of these technologies even as they start bringing real-world benefits.

    What are the longer-term implications?

    First, easier-to-use predictive analytics engines are blurring the gap between "everyday analytics" and the data science team. A "factory" approach to creating, deploying, and maintaining predictive models means data scientists can have greater impact. And sophisticated business users can now access some of the power of these algorithms without having to become data scientists themselves.

    Second, every business application will include some predictive functionality, automating any areas where there are “repeatable decisions.” It is hard to think of a business process that could not be improved in this way, with big implications in terms of both efficiency and white-collar employment.

    Third, applications will use these algorithms on themselves to create “self-improving” platforms that get easier to use and more powerful over time (akin to how each new semi-autonomous-driving Tesla car can learn something new and pass it onto the rest of the fleet).

    Fourth, over time, business processes, applications, and workflows may have to be rethought. If algorithms are available as a core part of business platforms, we can provide people with new paths through typical business questions such as “What’s happening now? What do I need to know? What do you recommend? What should I always do? What can I expect to happen? What can I avoid? What do I need to do right now?”

    Fifth, implementing all the above will involve deep and worrying moral questions in terms of data privacy and allowing algorithms to make decisions that affect people and society. There will undoubtedly be many scandals and missteps before the right rules and practices are in place.

    What first steps should companies be taking in this area?
    As usual, the barriers to business benefit are more likely to be cultural than technical.

    Above all, organizations need the right technical expertise to navigate the confusion of new vendor offerings, the right business knowledge to know where best to apply them, and the awareness that their technology choices may have unforeseen moral implications.

    Source: timoelliot.com, October 24, 2016


  • A three-stage approach to make your business AI ready


    Organizations implementing artificial intelligence (AI) have increased by 270% over the last four years, according to a recent survey by Gartner. Even though the implementation of AI is a growing trend, 63% of organizations haven’t deployed this technology. What is holding them back: cost? talent shortage? something else?

    For many organizations it is the inability to reach the desired confidence level in the algorithm itself. Data science teams often blow their budget, time and resources on AI models that never make it out of the beginning stages of testing. And even if projects make it out of the initial stage, not all projects are successful.

    One example we saw last year was Amazon’s attempt to implement AI in its HR department. Amazon received a huge number of resumes for its thousands of open positions and hypothesized that machine learning could go through all of them and find the top talent. While the system was able to filter the resumes and score the candidates, it also showed gender bias. The proof of concept had been approved, but the team had not watched for bias in its training data, and the project was recalled.

    Companies want to jump on the “Fourth Industrial Revolution” bandwagon and prove that AI will deliver ROI for their businesses. The truth is that AI is in its early stages, and many companies are only now getting AI ready. For machine learning (ML) project teams starting a project for the first time, a deliberate, three-stage approach to project evolution will pave a shortcut to success:

    1. Test the fundamental efficacy of your model with an internal Proof of Concept (POC)

    The point of a POC is to prove that in a certain case it is possible to save money or improve a customer experience using AI. You are not attempting to get the model to the level of confidence needed to deploy it, but just to say (and show) the project can work.

    A POC like this is all about testing things to see if a given approach produces results. There is no sense in making deep investments for a POC. You can use an off-the-shelf algorithm, find open source training data, purchase a sample dataset, create your own algorithm with limited functionality, and/or label your own data. Find what works for you to prove that your project will achieve the intended corporate goal. A successful POC is what is going to get the rest of the project funded.

    In the grand scheme of your AI project, this step is the easiest part of your journey. Keep in mind, as you get further into training your algorithm, you will not be able to use sample data or prepare all of your training data yourself. The subsequent improvements in model confidence required to make your system production ready will take immense amounts of training data.

    2. Prepare the data you’ll need to train your algorithm… and keep going

    In this step the hard work really begins. Let’s say your POC using pre-labeled data got your model to 60 percent confidence. That is not ready for primetime: in theory, it could mean that 40 percent of the interactions your algorithm has with customers will be unsatisfactory. How do you reach a higher level of confidence? More training data.

    Proving AI will work for your business is a huge step toward implementing it and actually reaping the benefits. But don’t let it lull you into thinking the next 10 percent of confidence will come easily. The ugly truth is that models have an insatiable appetite for training data, and getting from 60 to 70 percent confidence could take more training data than it took to reach the original 60 percent. The needs become exponential.
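To see why the data appetite grows faster than linearly, here is a toy calculation. It assumes a hypothetical power-law learning curve; the constants `a` and `b` are illustrative, not from the article:

```python
def samples_needed(target_conf, a=0.9, b=0.25):
    """Hypothetical power-law learning curve err(n) = a * n**(-b):
    solve for the labeling effort n that reaches the target
    confidence (1 - err). Constants are purely illustrative."""
    err = 1.0 - target_conf
    return (a / err) ** (1.0 / b)

# Each step up in confidence multiplies, not adds to, the data bill.
for conf in (0.60, 0.70, 0.80, 0.90):
    print(f"{conf:.0%} confidence -> ~{samples_needed(conf):,.0f}x baseline labeling effort")
```

Under these assumptions, each 10-point gain in confidence costs several times more labeled data than the previous one, which matches the "exponential needs" warning above.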

    3. Watch out for possible roadblocks

    Imagine: if it took tens of thousands of labeled images to prove one use case for a successful POC, it is going to take tens of thousands of images for each use case you need your algorithm to learn. How many use cases is that? Hundreds? Thousands? There are edge cases that will continually arise, and each of those will require training data. And on and on. It is understandable that data science teams often underestimate the quantity of training data they will need and attempt to do the labeling and annotating in-house. This could also partially account for why data scientists are leaving their jobs.

    While not enough training data is one common pitfall, there are others. It is essential that you are watching for and eliminating any sample, measurement, algorithm, or prejudicial bias in your training data as you go. You’ll want to implement agile practices to catch these things early and make adjustments.

    And one final thing to keep in mind: AI labs, data scientists, AI teams, and training data are expensive. Yet the same Gartner research that puts AI projects among the top three priorities also ranks AI thirteenth on the list of funding priorities. Yes, you’re going to need a bigger budget.

    Author: Glen Ford

    Source: Dataconomy

  • AI and the risks of Bias


    From facial recognition for unlocking our smartphones to speech recognition and intent analysis for voice assistance, artificial intelligence is all around us today. In the business world, AI is helping us uncover new insight from data and enhance decision-making.

    For example, online retailers use AI to recommend new products to consumers based on past purchases. And, banks use conversational AI to interact with clients and enhance their customer experiences.

    However, most of the AI in use now is “narrow AI,” meaning it is only capable of performing individual tasks. In contrast, general AI – which is not available yet – can replicate human thought and function, taking emotions and judgment into account. 

    General AI is still a way off so only time will tell how it will perform. In the meantime, narrow AI does a good job at executing tasks, but it comes with limitations, including the possibility of introducing biases.  

    AI bias may come from incomplete datasets or incorrect values. Bias may also emerge through interactions over time, skewing the machine’s learning. Moreover, a sudden business change, such as a new law or business rule, or ineffective training algorithms can also cause bias. We need to understand how to recognize these biases, and to design, implement and govern our AI applications so that the technology generates its desired business outcomes.

    Recognize and evaluate bias – in data samples and training

    One of the main drivers of bias is the lack of diversity in the data samples used to train an AI system. Sometimes the data is not readily available or it may not even exist, making it hard to address all potential use cases.

    For instance, airlines routinely run sensor data from in-flight aircraft engines through AI algorithms to predict needed maintenance and improve overall performance. But if the machine is trained only on data from flights over the Northern Hemisphere and then applied to a flight across sub-Saharan Africa, the differing conditions will produce inaccurate results. We need to evaluate the data used to train these systems and strive for well-rounded data samples.
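A first, crude check for this kind of sampling bias is simply measuring how each category is represented in the training sample. A minimal sketch, with made-up region labels echoing the flight example:

```python
from collections import Counter

def underrepresented(values, min_share=0.10):
    """Flag categories whose share of the training sample falls below
    min_share -- a crude first check for sampling bias."""
    counts = Counter(values)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items() if n / total < min_share}

# Illustrative flight-region labels for the engine-data example above.
regions = ["northern"] * 95 + ["sub-saharan"] * 5
print(underrepresented(regions))  # {'sub-saharan': 0.05}
```

A real evaluation would look at many features at once (and at combinations of them), but even this single-feature tally catches the Northern Hemisphere skew in the example.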

    Another driver of bias is incomplete training algorithms. For example, a chatbot designed to learn from conversations may be exposed to politically incorrect language. Unless trained not to, the chatbot may start using the same language with consumers, which Microsoft unfortunately learned in 2016 with its now-defunct Twitter bot, “Tay.” If a system is incomplete or skewed through learning like Tay, then teams have to adjust the use case and pivot as needed.

    Rushed training can also lead to bias. We often get excited about introducing AI into our businesses so naturally want to start developing projects and see some quick wins. 

    However, early applications can quickly expand beyond their intended purpose. Given that current AI cannot cover the gamut of human thought and judgement, eliminating emerging biases becomes a necessary task. Therefore, people will continue to be important in AI applications. Only people have the domain knowledge – acquired industry, business, and customer knowledge – needed to evaluate the data for biases and train the models accordingly.

    Diversify datasets and the teams working with AI

    Diversity is the key to mitigating AI biases – diversity in the datasets and the workforce working day to day with the models. As stated above, we need to have comprehensive, well-rounded datasets that can broadly cover all possible use cases. If there is underrepresented or disproportionate internal data, such as if the AI only has homogenous datasets, then external sources may fill in the gaps in information. This gives the machine a richer pool of data to learn and work with – and leads to predictions that are far more accurate. 

    Likewise, diversity in the teams working with AI can help mitigate bias. When there is only a small group within one department working on an application, it is easy for the thinking of these individuals to influence the system’s design and algorithms. Starting with a diverse team or introducing others into an existing group can make for a much more holistic solution. A team with varying skills, thinking, approaches and backgrounds is better equipped to recognize existing AI bias and anticipate potential bias. 

    For example, one bank used AI to automate 80 percent of its financial spreading process for public and private companies. It involved extracting numbers out of documents and formatting them into templates, while logging each step along the way. To train the AI and make sure the system pulled the right data while avoiding bias, the bank relied on a diverse team of experts with data science, customer experience, and credit decisioning expertise. Today, it applies AI to spreading on 45,000 customer accounts across 35 countries.

    Consider emerging biases and preemptively train the machine

    While AI can introduce biases, proper design (including the data samples and models) and thoughtful usage (such as governance over the AI’s learning) can help reduce and prevent them. And, in many situations, AI can actually minimize bias that would otherwise be present in human decision-making. An objective algorithm can compensate for the natural bias that a human might introduce, such as approving a customer for a loan based on their appearance.

    In recruiting, an AI program can review job descriptions to eliminate unconscious gender biases by flagging and removing words that may be construed as more masculine or feminine, and replacing them with more neutral terms. It is important to note that a domain expert needs to go in and make sure the changes are still accurate, but the system can recognize things that people could miss. 
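A minimal sketch of such a flag-and-replace pass might look like the following; the word list and neutral substitutes are illustrative stand-ins for a vetted lexicon:

```python
import re

# Illustrative word list; a production tool would use a vetted lexicon.
NEUTRAL = {"ninja": "expert", "rockstar": "high performer",
           "dominant": "leading", "nurturing": "supportive"}

def neutralize(text):
    """Replace words often read as gender-coded with neutral terms,
    returning the rewritten text plus the substitutions made."""
    changes = []
    def swap(match):
        replacement = NEUTRAL[match.group(0).lower()]
        changes.append((match.group(0), replacement))
        return replacement
    pattern = re.compile(r"\b(" + "|".join(NEUTRAL) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, text), changes

text, changes = neutralize("We need a dominant coding ninja.")
print(text)     # We need a leading coding expert.
print(changes)  # [('dominant', 'leading'), ('ninja', 'expert')]
```

Returning the list of substitutions matters for the point made above: a domain expert can review each change and confirm the rewritten description is still accurate.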

    Bias is an unfortunate reality in today’s AI applications. But by evaluating the data samples and training algorithms and making sure that both are comprehensive and complete, we can mitigate unintended biases. We need to task diverse teams with governing the machines to prevent unwanted outcomes. With the right protocol and measures, we can ensure that AI delivers on its promise and yields the best business results.

     Author: Sanjay Srivastava

    Source: Information Management

  • Turning AI into a successful strategy: 8 tips for marketers


    Artificial Intelligence (AI) should be the most important aspect of a data strategy. More than 60 percent of marketers think so, according to research by MemSQL. But actually deploying AI turns out to be another story. How can companies turn AI into a successful strategy? Here are 8 tips for marketers:

    1. Recommendation engines

    Focus on upselling by deploying recommendation engines. Recommendation engines are built to predict what else users might find interesting based on their search terms, especially when there is a lot of choice. They show users information or content they might otherwise never have seen, which can ultimately lead to higher revenue from more sales. The more that is known about a visitor, the better the recommendations become, and with them the chance of a sale. More than 80 percent of the shows people watch on Netflix, for example, are found through its recommendation engine. How does this work? First, Netflix collects all the data of its users. What do they watch? What did they watch last year? Which series are watched back to back? And so on. In addition, a group of freelance and in-house taggers reviews and tags all content. Does a series take place in space, or is the hero a police officer? Everything gets a tag. Machine learning algorithms are then let loose on this combined data, and viewers are divided into more than 2,000 different ‘taste groups’. The group a user is assigned to determines which viewing suggestions he or she receives.
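The tag-plus-similarity mechanism described above can be sketched in a few lines. This is a toy content-based recommender, not Netflix's system; the titles, tags, and scoring are invented for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two tag-count dicts."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative catalogue: titles described by editorial tags,
# mirroring the tagging step described above.
catalogue = {
    "Space Opera":  {"space": 2, "hero": 1},
    "Cop Drama":    {"police": 2, "city": 1},
    "Alien Patrol": {"space": 1, "police": 1, "hero": 1},
}

def recommend(watched, k=1):
    """Rank unwatched titles by tag similarity to the viewing history."""
    profile = {}
    for title in watched:
        for tag, n in catalogue[title].items():
            profile[tag] = profile.get(tag, 0) + n
    scores = {t: cosine(profile, tags) for t, tags in catalogue.items()
              if t not in watched}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["Space Opera"]))  # ['Alien Patrol']
```

Real systems add collaborative signals (what similar viewers watched) on top of content tags, which is roughly what the "taste groups" described above provide.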

    2. Forecasting

    Good sales forecasts help companies grow. But forecasts have been made by humans for years, and emotions can make or break a quarter. Without science, forecasts are often either overly optimistic or overly pessimistic. AI can help with forecasting based purely on data and facts. Thanks to AI, these data and facts can also be explained, so companies can learn from earlier forecasts and each next forecast only becomes more accurate.

    3. Counter churn

    As every marketer knows, acquiring new customers is far more expensive than retaining current ones. But how do you prevent customers from unsubscribing from your services or choosing other solutions? Make sure you understand customers who are about to leave your website ever better and can predict their behavior, because that is how customer loss is minimized. When you effectively address customers who are about to leave, you increase the chance of conversion. By using AI to build a predictive model that detects potential ‘churners’ and then targeting them with a marketing campaign, you prevent customer loss and can make changes to your product to counter churn.
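A common way to build such a churn detector is a logistic model over engagement features. A minimal sketch; the features are invented and the weights are hand-set here, whereas in practice they would be learned from historical churn labels:

```python
from math import exp

# Hand-set illustrative weights; real ones are fit on labeled churn data.
WEIGHTS = {"days_since_login": 0.08, "support_tickets": 0.4, "bias": -2.0}

def churn_probability(days_since_login, support_tickets):
    """Logistic model scoring how likely a customer is to leave."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["days_since_login"] * days_since_login
         + WEIGHTS["support_tickets"] * support_tickets)
    return 1.0 / (1.0 + exp(-z))

# Customers above a threshold get a retention campaign.
customers = [("ann", 2, 0), ("bob", 45, 3)]
at_risk = [name for name, days, tickets in customers
           if churn_probability(days, tickets) > 0.5]
print(at_risk)  # ['bob']
```

The threshold (0.5 here) is a business choice: lowering it widens the retention campaign at the cost of contacting customers who would have stayed anyway.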

    4. Content generation

    Content remains king. And you can capitalize on that with Natural Language Processing (NLP), the ability of a computer program to understand human language. NLP will continue to develop in the near future and become more mainstream. As computers understand language better and better, simple content can increasingly be generated automatically. That content remains enormously important is evident from research by the Content Marketing Institute (CMI): content marketing turns out to deliver three times as many leads per dollar spent as paid search! Moreover, content marketing costs less while offering greater long-term benefits.

    5. Hyper-targeted advertising

    Customers have ever more access to information and, faced with a surplus of choices, are becoming less loyal to a product or brand. The customer experience a company offers is increasingly important, so advertisements too must feel like a personal offer. Research by Salesforce shows that 51 percent of consumers expect that by 2020 companies will anticipate their needs and actively make relevant suggestions, in other words deploy hyper-targeted advertising. So use AI for data-driven customer segmentation and make advertisements ever more relevant per target group.

    6. Price optimization

    McKinsey estimates that about 30% of all pricing decisions companies make each year fail to deliver the optimal price. To stay competitive, it is important to continuously find the balance between what customers are willing to pay for a product or service and what the profit margins can bear. Large companies show that price optimization is often crucial to their success: Walmart reportedly changes its prices more than 50,000 times a month. By using AI for dynamic pricing, prices can be continuously updated based on changing factors, and you are no longer dependent on static data.

    7. Score better leads

    Deploy predictive lead scoring to score better leads and focus all your efforts on those most likely to buy. An IDC survey shows that 83 percent of companies already use, or plan to use, predictive lead scoring for sales and marketing. And with the help of AI, major gains can be made there. Predictive lead scoring was developed specifically to determine which criteria characterize a good lead. It uses algorithms that can establish which properties converted and non-converted leads have in common. With that knowledge, lead scoring software can build and test different predictive lead scoring models, then automatically choose the model that best fits a set of sample data. Because lead scoring software also uses machine learning, lead scores become ever more accurate.

    8. Marketing attribution

    And finally: understand in detail where the best (and worst) conversions come from, so you can act on it. With conversion attribution you can measure which website, search engine, advertisement, etc. brought a visitor to your site and whether or not they placed an order. With the help of machine learning you can build a smarter marketing attribution system that identifies exactly what influences individuals to show the desired behavior; in this case, the desired behavior is making a purchase. A good AI-powered marketing attribution system can thus drive more conversion.
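As an illustration of what an attribution model computes, here is a common position-based (U-shaped) scheme; the 40/20/40 split is a conventional default, not something prescribed by the article:

```python
def position_based(touchpoints, first=0.4, last=0.4):
    """Position-based (U-shaped) attribution: most credit to the first
    and last touch, the remainder split across the middle touches."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    credit = {t: 0.0 for t in dict.fromkeys(touchpoints)}
    rest = 1.0 - first - last
    middle = touchpoints[1:-1]
    # With only two touches, the middle share is split over the endpoints.
    credit[touchpoints[0]] += first + (0 if middle else rest / 2)
    credit[touchpoints[-1]] += last + (0 if middle else rest / 2)
    for t in middle:
        credit[t] += rest / len(middle)
    return credit

path = ["search ad", "newsletter", "retargeting ad"]
print(position_based(path))
# {'search ad': 0.4, 'newsletter': 0.2, 'retargeting ad': 0.4}
```

An ML-based attribution system replaces these fixed weights with weights learned from comparing converting and non-converting paths, but the output has the same shape: a credit share per channel.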

    Author: Hylke Visser

    Source: Emerce

  • An overview of Morgan Stanley's surge toward data quality


    Jeff McMillan, chief analytics and data officer at Morgan Stanley, has long worried about the risks of relying solely on data. If the data put into an institution's system is inaccurate or out of date, it will give customers the wrong advice. At a firm like Morgan Stanley, that just isn't an option.

    As a result, Morgan Stanley has been overhauling its approach to data. Chief among its goals is improving data quality in core business processing.

    “The acceleration of data volume and the opportunity this data presents for efficiency and product innovation is expanding dramatically,” said Gerard Hester, head of the bank’s data center of excellence. “We want to be sure we are ahead of the game.”

    The data center of excellence was established in 2018. Hester describes it as a hub with spokes out to all parts of the organization, including equities, fixed income, research, banking, investment management, wealth management, legal, compliance, risk, finance and operations. Each division has its own data requirements.

    “Being able to pull all this data together across the firm we think will help Morgan Stanley’s franchise internally as well as the product we can offer to our clients,” Hester said.

    The firm hopes that improved data quality will let the bank build higher quality artificial intelligence and machine learning tools to deliver insights and guide business decisions. One product expected to benefit from this is the 'next best action' the bank developed for its financial advisers.

    This next best action uses machine learning and predictive analytics to analyze research reports and market data, identify investment possibilities, and match them to individual clients’ preferences. Financial advisers can choose to use the next best action’s suggestions or not.

    Another tool that could benefit from better data is an internal virtual assistant called 'ask research'. Ask research provides quick answers to routine questions like, “What’s Google’s earnings per share?” or “Send me your latest model for Google.” This technology is currently being tested in several departments, including wealth management.

    New data strategy

    Better data quality is just one of the goals of the revamp. Another is to have tighter control and oversight over where and how data is being used, and to ensure the right data is being used to deliver new products to clients.

    To make this happen, the bank recently created a new data strategy with three pillars. The first is working with each business area to understand its data issues and begin to address them.

    “We have made significant progress in the last nine months working with a number of our businesses, specifically our equities business,” Hester said.

    The second pillar is tools and innovation that improve data access and security. The third pillar is an identity framework.

    At the end of February, the bank hired Liezel McCord to oversee data policy within the new strategy. Until recently, McCord was an external consultant helping Morgan Stanley with its Brexit strategy. One of McCord’s responsibilities will be to improve data ownership, to hold data owners accountable when the data they create is wrong and to give them credit when it’s right.

    “It’s incredibly important that we have clear ownership of the data,” Hester said. “Imagine you’re joining lots of pieces of data. If the quality isn’t high for one of those sources of data, that could undermine the work you’re trying to do.”

    Data owners will be held accountable for the accuracy, security and quality of the data they contribute and make sure that any issues are addressed.

    Trend of data quality projects

    Arindam Choudhury, the banking and capital markets leader at Capgemini, said many banks are refocusing on data as it gets distributed in new applications.

    Some are driven by regulatory concerns, he said. For example, the Basel Committee on Banking Supervision's standard number 239 (principles for effective risk data aggregation and risk reporting) is pushing some institutions to make data management changes.

    “In the first go-round, people complied with it, but as point-to-point interfaces and applications, which was not very cost effective,” Choudhury said. “So now people are looking at moving to the cloud or a data lake, they’re looking at a more rationalized way and a more cost-effective way of implementing those principles.”

    Another trend pushing banks to get their data house in order is competition from fintechs.

    “One challenge that almost every financial services organization has today is they’re being disintermediated by a lot of the fintechs, so they’re looking at assets that can be used to either partner with these fintechs or protect or even grow their business,” Choudhury said. “So they’re taking a closer look at the data access they have. Organizations are starting to look at data as a strategic asset and try to find ways to monetize it.”

    A third driver is the desire for better analytics and reports.

    "There’s a strong trend toward centralizing and figuring out, where does this data come from, what is the provenance of this data, who touched it, what kinds of rules did we apply to it?” Choudhury said. That, he said, could lead to explainable, valid and trustworthy AI.

    Author: Penny Crosman

    Source: Information-management

  • Artificial intelligence: Can Watson save IBM?

    The history of artificial intelligence has been marked by seemingly revolutionary moments — breakthroughs that promised to bring what had until then been regarded as human-like capabilities to machines. The AI highlights reel includes the “expert systems” of the 1980s and Deep Blue, IBM’s world champion-defeating chess computer of the 1990s, as well as more recent feats like the Google system that taught itself what cats look like by watching YouTube videos.

    But turning these clever party tricks into practical systems has never been easy. Most were developed to showcase a new computing technique by tackling only a very narrow set of problems, says Oren Etzioni, head of the AI lab set up by Microsoft co-founder Paul Allen. Putting them to work on a broader set of issues presents a much deeper set of challenges.
    Few technologies have attracted the sort of claims that IBM has made for Watson, the computer system on which it has pinned its hopes for carrying AI into the general business world. Named after Thomas Watson Sr, the chief executive who built the modern IBM, the system first saw the light of day five years ago, when it beat two human champions on an American question-and-answer TV game show, Jeopardy!
    But turning Watson into a practical tool in business has not been straightforward. After setting out to use it to solve hard problems beyond the scope of other computers, IBM in 2014 adapted its approach.
    Rather than just selling Watson as a single system, its capabilities were broken down into different components: each of these can now be rented to solve a particular business problem, a set of 40 different products such as language-recognition services that amount to a less ambitious but more pragmatic application of an expanding set of technologies.
    Though it does not disclose the performance of Watson separately, IBM says the idea has caught fire. John Kelly, an IBM senior vice-president and head of research, says the system has become “the biggest, most important thing I’ve seen in my career” and is IBM’s fastest growing new business in terms of revenues.
    But critics say that what IBM now sells under the Watson name has little to do with the original Jeopardy!-playing computer, and that the brand is being used to create a halo effect for a set of technologies that are not as revolutionary as claimed.

    “Their approach is bound to backfire,” says Mr Etzioni. “A more responsible approach is to be upfront about what a system can and can’t do, rather than surround it with a cloud of hype.”
    Nothing that IBM has done in the past five years shows it has succeeded in using the core technology behind the original Watson demonstration to crack real-world problems, he says.

    Watson’s case
    The debate over Watson’s capabilities is more than just an academic exercise. With much of IBM’s traditional IT business shrinking as customers move to newer cloud technologies, Watson has come to play an outsized role in the company’s efforts to prove that it is still relevant in the modern business world. That has made it key to the survival of Ginni Rometty, the chief executive who, four years after taking over, is struggling to turn round the company.
    Watson’s renown is still closely tied to its success on Jeopardy! “It’s something everybody thought was ridiculously impossible,” says Kris Hammond, a computer science professor at Northwestern University. “What it’s doing is counter to what we think of as machines. It’s doing something that’s remarkably human.”

    By divining the meaning of cryptically worded questions and finding answers in its general knowledge database, Watson showed an ability to understand natural language, one of the hardest problems for a computer to crack. The demonstration seemed to point to a time when computers would “understand” complex information and converse with people about it, replicating and eventually surpassing most forms of human expertise.
    The biggest challenge for IBM has been to apply this ability to complex bodies of information beyond the narrow confines of the game show and come up with meaningful answers. For some customers, this has turned out to be much harder than expected.
    The University of Texas’s MD Anderson Cancer Center began trying to train the system three years ago to discern patients’ symptoms so that doctors could make better diagnoses and plan treatments.
    “It’s not where I thought it would go. We’re nowhere near the end,” says Lynda Chin, head of innovation at the University of Texas’ medical system. “This is very, very difficult.” Turning a word game-playing computer into an expert on oncology overnight is as unlikely as it sounds, she says.

    Part of the problem lies in digesting real-world information: reading and understanding reams of doctors’ notes that are hard for a computer to ingest and organise. But there is also a deeper epistemological problem. “On Jeopardy! there’s a right answer to the question,” says Ms Chin, but in the medical world there are often just well-informed opinions.
    Mr Kelly denies IBM underestimated how hard challenges like this would be and says a number of medical organisations are on the brink of bringing similar diagnostic systems online.

    Applying the technology
    IBM’s initial plan was to apply Watson to extremely hard problems, announcing in early press releases “moonshot” projects to “end cancer” and accelerate the development of Africa. Some of the promises evaporated almost as soon as the ink on the press releases had dried. For instance, a far-reaching partnership with Citibank to explore using Watson across a wide range of the bank’s activities, quickly came to nothing.
    Since adapting its approach in 2014, IBM has sold some services under the Watson brand. Available through APIs, or programming “hooks” that make them available as individual computing components, they include sentiment analysis — trawling information like a collection of tweets to assess mood — and personality tracking, which measures a person’s online output using 52 different characteristics to come up with a verdict.
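For a sense of what a sentiment-analysis component does conceptually, here is a naive lexicon-based sketch. It bears no relation to Watson's actual implementation; the word list and scoring are invented:

```python
# Tiny illustrative lexicon; real services use far richer models.
LEXICON = {"love": 1, "great": 1, "happy": 1,
           "hate": -1, "awful": -1, "slow": -1}

def mood(tweets):
    """Average sentiment over a collection of tweets:
    positive scores lean happy, negative scores lean unhappy."""
    scores = []
    for tweet in tweets:
        words = tweet.lower().split()
        scores.append(sum(LEXICON.get(w, 0) for w in words))
    return sum(scores) / len(scores)

print(mood(["I love this", "awful and slow service"]))  # -0.5
```

Production systems handle negation ("not great"), sarcasm, and context with trained models rather than word counting, which is where the machine learning comes in.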

    At the back of their minds, most customers still have some ambitious “moonshot” project they hope that the full power of Watson will one day be able to solve, says Mr Kelly; but they are motivated in the short term by making improvements to their business, which he says can still be significant.
    This more pragmatic formula, which puts off solving the really big problems to another day, is starting to pay dividends for IBM. Companies like Australian energy group Woodside are using Watson’s language capabilities as a form of advanced search engine to trawl their internal “knowledge bases”. After feeding more than 20,000 documents from 30 years of projects into the system, the company’s engineers can now use it to draw on past expertise, like calculating the maximum pressure that can be used in a particular pipeline.
    To critics in the AI world, the new, componentised Watson has little to do with the original breakthrough and waters down the technology. “It feels like they’re putting a lot of things under the Watson brand name — but it isn’t Watson,” says Mr Hammond.
    Mr Etzioni goes further, claiming that IBM has done nothing to show that its original Jeopardy!-playing breakthrough can yield results in the real world. “We have no evidence that IBM is able to take that narrow success and replicate it in broader settings,” he says. Of the box of tricks that is now sold under the Watson name, he adds: “I’m not aware of a single, super-exciting app.”

    To IBM, though, such complaints are beside the point. “Everything we brand Watson analytics is very high-end AI,” says Mr Kelly, involving “machine learning and high-speed unstructured data”. Five years after Jeopardy! the system has evolved far beyond its original set of tricks, adding capabilities such as image recognition to expand greatly the range of real-world information it can consume and process.

    Adopting the system
    This argument may not matter much if the Watson brand lives up to its promise. It could be self-fulfilling if a number of early customers adopt the technology and put in the work to train the system to work in their industries, something that would progressively extend its capabilities.

    Another challenge for early users of Watson has been knowing how much trust to put in the answers the system produces. Its probabilistic approach makes it very human-like, says Ms Chin at MD Anderson. Having been trained by experts, it tends to make the kind of judgments that a human would, with the biases that implies.
    In the business world, a brilliant machine that throws out an answer
    to a problem but cannot explain itself will be of little use, says Mr Hammond. “If you walk into a CEO’s office and say we need to shut down three factories and sack people, the first thing the CEO will say is: ‘Why?’” He adds: “Just producing a result isn’t enough.”
    IBM’s attempts to make the system more transparent, for instance by using a visualisation tool called WatsonPaths to give a sense of how it reached a conclusion, have not gone far enough, he adds.
    Mr Kelly says a full audit trail of Watson’s decision-making is embedded in the system, even if it takes a sophisticated user to understand it. “We can go back and figure out what data points Watson connected” to reach its answer, he says.

    He also contrasts IBM with other technology companies like Google and Facebook, which are using AI to enhance their own services or make their advertising systems more effective. IBM is alone in trying to make the technology more transparent to the business world, he argues: “We’re probably the only ones to open up the black box.”
    Even after the frustrations of wrestling with Watson, customers like MD Anderson still believe it is better to be in at the beginning of a new technology.
    “I am still convinced that the capability can be developed to what we thought,” says Ms Chin. Using the technology to put the reasoning capabilities of the world’s oncology experts into the hands of other doctors could be far-reaching: “The way Amazon did for retail and shopping, it will change what care delivery looks like.”
    Ms Chin adds that Watson will not be the only reasoning engine that is deployed in the transformation of healthcare information. Other technologies will be needed to complement it, she says.
    Five years after Watson’s game show gimmick, IBM has finally succeeded in stirring up hopes of an AI revolution in business. Now, it just has to live up to the promises.

    Source: Financial Times

  • Big Data Predictions for 2016

    A roundup of big data and analytics predictions and pontifications from several industry prognosticators.

    At the end of each year, PR folks from different companies in the analytics industry send me predictions from their executives on what the next year holds. This year, I received a total of 60 predictions from a record 17 companies. I can't laundry-list them all, but I can and did put them in a spreadsheet (irony acknowledged) to determine the broad categories many of them fall in. And the bigger of those categories provide a nice structure to discuss many of the predictions in the batch.

    Predictions streaming in
    MapR CEO John Schroeder, whose company just added its own MapR Streams component to its Hadoop distribution, says "Converged Approaches [will] Become Mainstream" in 2016. By "converged," Schroeder is alluding to the simultaneous use of operational and analytical technologies. He explains that "this convergence speeds the 'data to action' cycle for organizations and removes the time lag between analytics and business impact."

    The so-called "Lambda Architecture" focuses on this same combination of transactional and analytical processing, though MapR would likely point out that a "converged" architecture co-locates the technologies and avoids Lambda's approach of tying the separate technologies together.

    Whether integrated or converged, Phu Hoang, the CEO of DataTorrent predicts 2016 will bring an ROI focus to streaming technologies, which he summarizes as "greater enterprise adoption of streaming analytics with quantified results." Hoang explains that "while lots of companies have already accepted that real-time streaming is valuable, we'll see users looking to take it one step further to quantify their streaming use cases."

    Which industries will take charge here? Hoang says "FinTech, AdTech and Telco lead the way in streaming analytics." That makes sense, but I think heavy industry is, and will be, in a leadership position here as well.

    In fact, some in the industry believe that just about everyone will formulate a streaming data strategy next year. One of those is Anand Venugopal of Impetus Technologies, whom I spoke with earlier this month. Venugopal, in fact, feels that we are within two years of streaming data being seen as just another data source.

    Internet of predicted things
    It probably won't shock you that the Internet of Things (IoT) was a big theme in this year's round of predictions. Quentin Gallivan, Pentaho's CEO, frames the thoughts nicely with this observation: "Internet of Things is getting real!" Adam Wray, CEO at Basho, quips that "organizations will be seeking database solutions that are optimized for the different types of IoT data." That might sound a bit self-serving, but Wray justifies this by reasoning that this will be driven by the need to "make managing the mix of data types less operationally complex." That sounds fair to me.

    Snehal Antani, CTO at Splunk, predicts that "Industrial IoT will fundamentally disrupt the asset intelligence industry." Suresh Vasudevan, the CEO of Nimble Storage proclaims "in 2016 the IoT invades the datacenter." That may be, but IoT technologies are far from standardized, and that's a barrier to entry for the datacenter. Maybe that's why the folks at DataArt say "the IoT industry will [see] a year of competition, as platforms strive for supremacy." Maybe the data center invasion will come in 2017, then.

    Otto Berkes, CTO at CA Technologies, asserts that "Bitcoin-born Blockchain shows it can be the storage of choice for sensors and IoT." I hardly fancy myself an expert on blockchain technology, so I asked CA for a little more explanation around this one. A gracious reply came back, explaining that "IoT devices using this approach can transact directly and securely with each other...such a peer-to-peer configuration can eliminate potential bottlenecks and vulnerabilities." That helped a bit, and it incidentally shines a light on just how early-stage IoT technology still is, with respect to security and distributed processing efficiencies.

    Growing up
    Though admittedly broad, the category with the most predictions centered on the theme of value and maturity in Big Data products supplanting the fascination with new features and products. Essentially, value and maturity are proxies for the enterprise-readiness of Big Data platforms.

    Pentaho's Gallivan says that "the cool stuff is getting ready for prime time." MapR's Schroeder predicts "Shiny Object Syndrome Gives Way to Increased Focus on Fundamental Value," and qualifies that by saying "...companies will increasingly recognize the attraction of software that results in business impact, rather than focusing on raw big data technologies." In a related item, Schroeder predicts "Markets Experience a Flight to Quality," further stating that "...investors and organizations will turn away from volatile companies that have frequently pivoted in their business models."

    Sean Ma, Trifacta's Director of Product Management, looking at the manageability and tooling side of maturity, predicts that "Increasing the amount of deployments will force vendors to focus their efforts on building and marketing management tools." He adds: "Much of the capabilities in these tools...will need to replicate functionality in analogous tools from the enterprise data warehouse space, specifically in the metadata management and workflow orchestration." That's a pretty bold prediction, and Ma's confidence in it may indicate that Trifacta has something planned in this space. But even if not, he's absolutely right that this functionality is needed in the Big Data world. In terms of manageability, Big Data tooling needs to achieve not just parity with data warehousing and BI tools, but needs to surpass that level.

    The folks at Signals say "Technology is Rising to the Occasion" and explain that "advances in artificial intelligence and an understanding [of] how people work with data is easing the collaboration between humans and machines necessary to find meaning in big data." I'm not sure if that is a prediction or just wishful thinking, but it certainly is the way things ought to be. For all the advances we've made in analyzing data using machine learning and intelligence, sifting through the output remains a largely manual process.

    Finally, Mike Maciag, the COO at AltiScale, asserts this forward-looking headline: "Industry standards for Hadoop solidify." Maciag backs up his assertion by pointing to the Open Data Platform initiative (ODPi) and its work to standardize Hadoop distributions across vendors. ODPi was originally anchored by Hortonworks, with numerous other companies, including AltiScale, IBM and Pivotal, jumping on board. The organization is now managed under the auspices of the Linux Foundation.

    Artificial flavor
    Artificial Intelligence (AI) and Machine Learning (ML) figured prominently in this year's predictions as well. Splunk's Antani reasons that "Machine learning will drastically reduce the time spent analyzing and escalating events among organizations." But Lukas Biewald, Founder and CEO of Crowdflower insists that "machines will automate parts of jobs -- not entire jobs." These two predictions are not actually contradictory. I offer both of them, though, to point out that AI can be a tool without being a threat.

    Be that as it may, Biewald also asserts that "AI will significantly change the business models of companies today." He expands on this by saying "legacy companies that aren't very profitable and possess large data sets may become more valuable and attractive acquisition targets than ever." In other words, if companies found gold in their patent portfolios previously, they may find more in their data sets, as other companies acquire them to further their efforts in AI, ML and predictive modeling.

    And more
    These four categories were the biggest among all the predictions but not the only ones, to be sure. Predictions around cloud, self-service, flash storage and the increasing prominence of the Chief Data Officer were in the mix as well. A number of predictions that stood on their own were there too, speaking to issues as far-reaching as salaries for Hadoop admins to open source, open data and container technology.

    What's clear from almost all the predictions, though, is that the market is starting to take basic big data technology as a given, and is looking towards next-generation integration, functionality, intelligence, manageability and stability. This implies that customers will demand certain baseline data and analytics functionality as part of most technology solutions going forward. And that's a great sign for everyone involved in Big Data.

    Source: ZDNet


  • Bol.com: machine learning to better match demand and supply

    An online marketplace is a concept that e-commerce keeps adopting at an increasing rate. Besides consumer-to-consumer marketplaces such as Marktplaats.nl, there are of course also business-to-consumer marketplaces, where an online platform brings together the demand of consumers and the supply of vendors.

    Some marketplaces have no assortment of their own: their offering consists entirely of affiliated vendors; think of Alibaba, for example. At Amazon, the share of own products is 50 percent. Bol.com also has its own marketplace: ‘Verkopen via bol.com’ (‘Selling via bol.com’). It adds millions of extra items to bol.com’s assortment.

    Safeguarding content quality

    There is a lot involved in managing such a marketplace. The goal is clear: bring demand and supply together as quickly as possible, so that the customer is immediately offered a number of products that are relevant to them. And with millions of customers on one side and millions of products from thousands of vendors on the other, that is quite a task.

    Jens explains: “It starts with standardising the information on both the demand and the supply side. For example, if you as a supplier want to offer a Tchaikovsky CD or a pair of Dolce & Gabbana glasses on bol.com, there are many possible spellings. For a sales platform like ‘Verkopen via bol.com’, the quality of the data is crucial. Maintaining the quality of the content is therefore one of the challenges.”

    On the other side of the transaction, bol.com’s customers naturally type all kinds of variations of terms, such as brand names, into the search field. In addition, people increasingly search on generic terms such as ‘wedding gift’ or ‘party supplies’.

    Bringing demand and supply together

    As the assortment grows, which it does, and customers search ever more generically, it becomes increasingly challenging to make a match and keep relevance high. Given the volume of this unstructured data and the fact that it has to be analysed in real time, you cannot make that match by hand. You have to be able to use the data intelligently. That is one of the activities of bol.com’s customer intelligence team, part of the customer centric selling department.

    Jens: “The trick is to translate customer behaviour on the website into content improvements. By analysing the words (and word combinations) customers use to search for items and matching them against the products that are ultimately bought, synonyms for those products can be created. These synonyms raise the relevance of the search results, helping the customer find the product faster. Moreover, it cuts both ways, because the quality of the product catalogue improves at the same time. Think of refining the various colour descriptions (WIT, Wit, witte, white, etc.).”
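The query-to-purchase matching described here can be sketched as a simple co-occurrence mapping: search queries that lead to the same purchased product become synonym candidates for each other. The data is invented for the example; this is not bol.com's actual pipeline.

```python
from collections import defaultdict

# (search query, product bought) pairs, as would be mined from clickstream
# logs. All data here is invented for the example.
events = [
    ("tsjaikovski", "Tchaikovsky: Swan Lake CD"),
    ("tchaikovsky", "Tchaikovsky: Swan Lake CD"),
    ("tsjaikovski cd", "Tchaikovsky: Swan Lake CD"),
    ("white shirt", "Shirt (white)"),
    ("witte shirt", "Shirt (white)"),
]

# Group queries by the product they led to.
synonyms = defaultdict(set)
for query, product in events:
    synonyms[product].add(query)

# All queries that converged on the same product are synonym candidates.
candidates = synonyms["Tchaikovsky: Swan Lake CD"]
```

In practice such candidates would be weighted by frequency and reviewed before being added to the search index.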

    Algorithms keep getting smarter

    The process above still runs semi-automatically (and retroactively), but the ambition is to have it run fully automatically in the future. To get there, machine learning techniques are being implemented step by step. The first investment was in technologies for processing large volumes of unstructured data very quickly. Bol.com owns two of its own data centres with dozens of clusters.

    “We are now experimenting extensively with using these clusters to improve the search algorithm, enrich the content and standardise it,” Jens says. “And that brings challenges. After all, if you take standardisation too far, you end up in a self-fulfilling prophecy. Fortunately, the algorithms are taking over bit by bit and getting smarter. The algorithm now tries to link a search term to a product itself and presents the result to various internal specialists. Concretely, the specialists are told that ‘there is a 75 percent chance that this is what the customer means’. That link is then validated manually. The specialists’ feedback on a proposed improvement provides important input for the algorithms to process information even better. You can see the algorithms doing their job better and better.”

    Yet this raises the next question for Jens and his team: where do you draw the line above which the algorithm can take the decision itself? Is that at 75 percent? Or should everything below 95 percent be validated by human judgement?
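That threshold question can be made concrete in a few lines: matches above a cut-off are accepted automatically and everything else is queued for a specialist. This is only a sketch; the 75 and 95 percent figures come from the article, everything else is illustrative.

```python
def route(match_confidence: float, auto_threshold: float = 0.95) -> str:
    """Accept a proposed match automatically above the threshold,
    otherwise send it to a human specialist for validation."""
    if match_confidence >= auto_threshold:
        return "auto-accept"
    return "human-review"

# A 75% match goes to a specialist under a 95% auto-accept threshold.
decisions = [route(c) for c in (0.75, 0.96, 0.50)]
```

Lowering `auto_threshold` trades review workload against the risk of wrong automatic matches, which is exactly the trade-off the team is weighing.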

    Building a better shop for customers with big data

    Three years ago, big data was a subject discussed mainly in PowerPoint slides. Today, many (larger) e-commerce companies have their own Hadoop cluster. The next step is to use big data to make the shop genuinely better for customers, and bol.com is working hard on that. In 2010 the company switched from ‘mass media’ to ‘personally relevant’ campaigns, increasingly trying to present a personal message to the customer in real time, based on various ‘triggers’.

    Those triggers (such as pages visited or products viewed) increasingly outweigh historical data (who the customer is and what they have bought in the past).

    “If you gain insight into the relevant triggers and leave out the irrelevant ones,” Jens says, “you can serve the consumer better, for example by showing the most relevant review, making an offer or compiling a selection of comparable products. That way you connect better to the customer journey, and the chance that customers find what they are looking for keeps growing.”

    Bol.com does this by first looking for the relevant triggers, based on behaviour on the website as well as the customer’s known preferences. Once these triggers are linked to the content, bol.com runs A/B tests to analyse conversion and decide whether or not to roll the change out permanently. After all, every change must result in higher relevance.
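The A/B-test step, checking whether a change actually improved conversion, can be sketched with a two-proportion z-test using only the standard library. The visitor and conversion counts are invented for the example; the article does not describe bol.com's actual test tooling.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts at 6% vs. 5% for A, with 10,000 visitors each.
z = two_proportion_z(500, 10_000, 600, 10_000)
significant = abs(z) > 1.96                          # ~95% confidence level
```

Only when `significant` is true (and the direction is positive) would the change be rolled out permanently.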

    Analysing unstructured data naturally involves a range of techniques, and it requires both smart algorithms and human insight. Jens: “Fortunately, it is not only our algorithms that are self-learning, but the company as well, so the process keeps getting faster and better.”


    Outsourcing or doing everything in-house is a strategic decision. Bol.com chose the latter. Of course, external expertise is still brought in on an ad-hoc basis when it helps speed up processes. Data analysts and data scientists are an important part of the growing customer centric selling team.

    The difference speaks for itself: data analysts are trained in ‘traditional’ tools such as SPSS and SQL and do analysis work. Data scientists have greater conceptual flexibility and can also program in Java, Python and Hive, among other languages. There are of course growth opportunities for ambitious data analysts, but it is nevertheless becoming harder and harder to find data scientists.

    Although the market is working hard to expand the supply, for now this remains a small, select group of professionals. Bol.com does everything it can to recruit and train the right people. First, an employee with the right profile is brought in; think of someone who has just graduated in artificial intelligence, applied physics or another exact science. This brand-new data scientist is then taken under the wing of one of the experienced experts from bol.com’s training team. Training in programming languages is an important part of this; beyond that, it is mostly learning by doing.

    Man versus machine

    As algorithms get smarter and artificial intelligence technologies more advanced, you might think the shortage of data scientists is temporary: the computers will simply take over.

    According to Jens, that is not the case: “You will always need human insight. But because the machines take over more and more of the routine, standardised analysis work, you can keep doing more. For example, not processing the top 10,000 search terms, but all of them. In effect, you can go much deeper and much broader, so the impact of your work on the organisation is many times greater. The result? Customers are helped better and save time, because they get ever more relevant information and are therefore more engaged. And it takes us further in our ambition to offer our customers the best shop there is.”

    Click here for the full report.

    Source: Marketingfacts

  • Data analytics: From studying the past to forecasting the future

    Data analytics: From studying the past to forecasting the future

    To compete in today's competitive market place, it is critical that executives have access to an accurate and holistic view of their business. The key element to sifting through a massive amount of data to gain this level of transparency is a robust analytics solution. As technology is constantly evolving, so too are data analytics solutions. 

    In this blog, three types of data analytics and the emerging role of artificial intelligence (AI) in processing the data are discussed:

    Descriptive analytics

    As the name suggests, descriptive analytics describe what happened in the past. This is accomplished by taking raw historical data, whether from five minutes or five years ago, and presenting an easy-to-understand, accurate view of past patterns or behaviors. By understanding what happened, we can better understand how it might influence the future. Many businesses use descriptive analytics to understand customer buying patterns, sales year-over-year, historical cost-to-serve, supply chain patterns, financials, and much more.
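At its simplest, the descriptive step is aggregation over historical records, e.g. sales by year. The order data below is invented; a real solution would read from a warehouse rather than an in-memory list.

```python
from collections import defaultdict

# Historical order records as (year, amount) pairs -- invented example data.
orders = [(2014, 120.0), (2014, 80.0), (2015, 150.0), (2015, 90.0), (2015, 60.0)]

# Total sales per year: the raw material of a descriptive dashboard.
sales_by_year = defaultdict(float)
for year, amount in orders:
    sales_by_year[year] += amount

# Year-over-year growth, the kind of figure a descriptive view surfaces.
yoy_growth = sales_by_year[2015] / sales_by_year[2014] - 1
```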

    Predictive analytics

    This is the ability to accurately forecast or predict what could happen moving forward. Understanding the likelihood of future outcomes enables the company to better prepare based on probabilities. This is accomplished by taking the historical data from your various silos, such as CRM, ERP, and POS, and combining it into one single version of the truth. This enables users to identify trends in sales and to forecast demands on the supply chain, purchasing, and inventory levels based on a number of variables.
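A minimal sketch of the predictive step is fitting a linear trend to the combined sales history and extrapolating one period ahead. The yearly figures are invented for the example; real predictive models use many more variables.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

years = [2012, 2013, 2014, 2015]
sales = [100.0, 110.0, 125.0, 135.0]      # invented yearly sales figures

a, b = fit_line(years, sales)
forecast_2016 = a + b * 2016              # extrapolate one year ahead
```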

    Prescriptive Analytics

    This solution is the newest evolution in data analytics. It takes the previous iterations to the next level by revealing possible outcomes and prescribing courses of action. In addition, this solution will also show why each outcome is likely to happen. Prescriptive analytics answers the question: what should we do? Although this is a relatively new form of analytics, larger retail companies are successfully using it to optimize customer experience, production, purchasing, and inventory in the supply chain to make sure the right products are being delivered at the right time. In the stock market, prescriptive analytics can recommend when to buy or sell to optimize your profit.
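A prescriptive rule can be sketched as a decision layered on top of a forecast, e.g. an order-quantity recommendation for inventory. This is deliberately simplified; the `safety_stock` parameter and the numbers are assumptions for illustration, and real prescriptive engines weigh far more variables.

```python
def prescribe_order(forecast_demand: float, on_hand: int,
                    safety_stock: int = 20) -> int:
    """Recommend how many units to order: cover forecast demand
    plus a safety buffer, minus what is already in stock."""
    needed = forecast_demand + safety_stock - on_hand
    return max(0, round(needed))

# Forecast demand of 150 units with 40 on hand -> order 130.
order_qty = prescribe_order(forecast_demand=150.0, on_hand=40)
```

The descriptive and predictive steps supply the inputs; the prescriptive layer turns them into an action.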

    All three categories of analytics work together to provide the guidance and intelligence to optimize business performance.

    Where AI fits in

    As technology continues to advance, AI will become a game-changer by making analytics substantially more powerful. A decade ago, analytics solutions only provided descriptive analytics.  As the amount of data generated increased, solutions started to develop predictive analytics. As AI evolves, data analytics solutions are also changing and becoming more sophisticated. BI software vendors are currently posturing to be the first to market with an AI offering to enhance prescriptive analytics. 

    AI can help sales-based organizations by providing specific recommendations that sales representatives can act on immediately. Insight into customer buying patterns will allow prescriptive analytics to suggest products to bundle which ultimately leads to an increase in the size of an order, reduce delivery costs and number of invoices.

    Predictive ordering has enabled companies to send products you need before you order them. For example, some toothbrush or razor companies will send replacement heads in this way. They predict when the heads will begin to fail and order the replacement for you. 

    Improving data analytics for your business

    If you are considering enhancing your data analytics capability and adding artificial intelligence, we encourage you to seek out a software vendor that offers industry-matched data analytics that is easy and intuitive for everyone to use. This means pre-built dashboards, scorecards, and alerts developed around the standard KPIs for your industry.

    Collaborating to customize the software to fit your business and augmenting with newer predictive analytics and machine learning-based AI happens next.

    Source: Phocas Software

  • Dealing with data preparation: best practices - Part 1

    Dealing with data preparation: best practices - Part 1

    IBM is reporting that data quality challenges are a top reason why organizations are reassessing (or ending) artificial intelligence (AI) and business intelligence (BI) projects.

    Arvind Krishna, IBM’s senior vice president of cloud and cognitive software, stated in a recent interview with the Wall Street Journal that 'about 80% of the work with an AI project is collecting and preparing data. Some companies aren’t prepared for the cost and work associated with that going in. And you say: ‘Hey, wait a moment, where’s the AI? I’m not getting the benefit.’ And you kind of bail on it'.

    Many businesses are not prepared for the cost and effort associated with data preparation (DP) when starting AI and BI projects. To compound matters, hundreds of data and record types and billions of records are often involved in a project’s DP effort.

    However, data analytics projects are increasingly imperative to organizational success in the digital economy, hence the need for DP solutions.

    What is AI/BI data preparation?

    Gartner defines data preparation as 'an iterative and agile process for exploring, combining, cleaning, and transforming raw data into curated datasets for data integration, data science, data discovery, and analytics/business intelligence (BI) use cases'. 

    A 2019 International Data Corporation (IDC) study reports that data workers spend a remarkable amount of time each week on data-related activities: 33% on data preparation compared to 32% on analytics (and, sadly, just 13% on data science). The top challenge cited by more than 30% of all data workers in this study was that 'too much time is spent on data preparation'.

    The variety of data sources, the multiplicity of data types, the enormity of data volumes, and the numerous uses for data analytics and business intelligence, all result in multiple data sources and complexity for each project. Consequently, today’s data workers often use numerous tools for DP success.

    Capabilities needed in data preparation tools

    Evidence in the Gartner Research report Market Guide for Data Preparation Tools shows that data preparation time and reporting of information discovered during DP can be reduced by more than half when DP tools are implemented.

    In the same research report, Gartner lists details of vendors and DP tools. The analyst firm predicts that the market for DP solutions will reach $1 billion this year, with nearly a third (30%) of IT organizations employing some type of self-service data preparation tool set.

    Another Gartner Research Circle Survey on data and analytics trends revealed that over half (54%) of respondents want and need to automate their data preparation and cleansing tasks during the next 12 to 24 months.

    To accelerate data understandings and improve trust, data preparation tools should have certain key capabilities, including the ability to:

    • Extract and profile data. Typically, a data prep tool uses a visual environment that enables users to extract interactively, search, sample, and prepare data assets.
    • Create and manage data catalogs and metadata. Tools should be able to create and search metadata as well as track data sources, data transformations, and user activity against each data source. They should also keep track of data source attributes, data lineage, relationships, and APIs. All of this enables access to a metadata catalog for data auditing, analytics/BI, data science, and other operational use cases.
    • Support basic data quality and governance features. Tools must be able to integrate with other tools that support data governance/stewardship and data quality criteria.
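The extract-and-profile capability above amounts to computing per-column statistics that can feed a metadata catalog. A minimal stdlib sketch over a list of records (the column names and rows are invented):

```python
def profile(rows, column):
    """Basic column profile: row count, null count, distinct non-null values."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
    }

# Invented sample records with one missing amount.
rows = [
    {"country": "NL", "amount": 10},
    {"country": "NL", "amount": None},
    {"country": "BE", "amount": 7},
]

country_profile = profile(rows, "country")
amount_profile = profile(rows, "amount")
```

Commercial DP tools compute the same kinds of statistics interactively and at scale, then store them as searchable metadata.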

    Keep an eye out for part 2 of this article, where we take a deeper dive into best practices for data preparation.

    Author: Wayne Yaddow

    Source: TDWI

  • Dealing with data preparation: best practices - Part 2

    Dealing with data preparation: best practices - Part 2

    If you haven't read yesterday's part 1 of this article, be sure to check it out before reading this article.

    Getting started with data preparation: best practices

    The challenge is getting good at DP. As a recent report by business intelligence pioneer Howard Dresner found, 64% of respondents constantly or frequently perform end-user DP, but only 12% reported they were very effective. Nearly 40% of data professionals spend half of their time prepping data rather than analyzing it.

    Following are a few of the practices that help assure optimal DP for your AI and BI projects. Many more can be found from data preparation service and product suppliers.

    Best practice 1: Decide which data sources are needed to meet AI and BI requirements

    Take these three general steps to data discovery:

    1. Identify the data needed to meet required business tasks.
    2. Identify potential internal and external sources of that data (and include its owners).
    3. Assure that each source will be available according to required frequencies.

    Best practice 2: Identify tools for data analysis and preparation

    It will be necessary to load data sources into DP tools so the data can be analyzed and manipulated. It’s important to get the data into an environment where it can be closely examined and readied for the next steps.

    Best practice 3: Profile data for potential and selected source data

    This is a vital (but often discounted) step in DP. A project must analyze source data before it can be properly prepared for downstream consumption. Beyond simple visual examination, you need to profile data, detect outliers, and find null values (and other unwanted data) among sources.

    The primary purpose of this profiling analysis is to decide which data sources are even worth including in your project. As data warehouse guru Ralph Kimball writes in The Data Warehouse Toolkit: 'Early disqualification of a data source is a responsible step that can earn you respect from the rest of the team'.
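    As a minimal illustration of this kind of profiling (using pandas, with invented column names, values, and a hypothetical domain rule for screening):

```python
import pandas as pd

# Toy source extract; columns and values are invented for illustration
df = pd.DataFrame({
    "age":    [34, 29, None, 41, 250],
    "income": [52000, 48000, 51000, None, 49500],
})

# Basic profile: data types, null counts, and value ranges per column
profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "nulls":    df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(1),
    "min":      df.min(),
    "max":      df.max(),
})
print(profile)

# Screen for domain-rule outliers (a plausible human age range)
outliers = df[(df["age"] < 0) | (df["age"] > 120)]
print(outliers)
```

A profile like this immediately surfaces the null values and the implausible age of 250, exactly the kind of evidence needed to qualify or disqualify a source.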

    Best practice 4: Cleanse and screen source data

    Based on your knowledge of the end business analytics goal, experiment with different data cleansing strategies that will get the relevant data into a usable format. Start with a small, statistically valid sample to iteratively experiment with different data prep strategies, refine your record filters, and discuss the results with business stakeholders.

    Once you have discovered what seems to be a good DP approach, take time to rethink the subset of data you really need to meet the business objective. Running your data prep rules on the entire data set will be very time consuming, so think critically with business stakeholders about which entities and attributes you do and don’t need and which records you can safely filter out.
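    The sample-first, iterate-later approach can be sketched as follows; the cleansing rules, column names, and data are assumptions for illustration:

```python
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """One candidate cleansing strategy: drop incomplete records,
    normalize a text field, and filter out-of-scope rows."""
    out = df.dropna(subset=["customer_id"])
    out = out.assign(country=out["country"].str.strip().str.upper())
    return out[out["amount"] > 0]       # illustrative business rule: keep paid orders

full = pd.DataFrame({
    "customer_id": [1, 2, None, 4, 5, 6],
    "country": [" nl", "US ", "de", "US", " DE", "nl"],
    "amount": [10.0, 0.0, 5.0, 7.5, -2.0, 3.0],
})

# Experiment on a small sample first, then apply the rules to the full set
sample = full.sample(frac=0.5, random_state=42)
print(cleanse(sample))

cleaned = cleanse(full)
print(len(cleaned), "of", len(full), "records kept")
```

Iterating on the sample keeps each experiment cheap; only the agreed-upon rules are run against the full data set.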

    Final thoughts

    Proper and thorough data preparation, conducted from the start of an AI/BI project, leads to faster, more efficient AI and BI down the line. DP steps and processes outlined here apply to whatever technical setup you are using, and they will get you better results.

    Note that DP is not a 'do once and forget' task. Data is constantly generated from multiple sources that may change over time, and the context of your business decisions will certainly change over time. Partnering with data preparation solution providers is an important consideration for the long-term capability of your DP infrastructure.

    Author: Wayne Yaddow

    Source: TDWI

  • Digital technologies to deliver the European business sector 545 billion euros over the next two years

    European businesses can achieve a revenue increase of 545 billion euros over the next two years by applying digital tools and technologies. For Dutch businesses this figure is 23.5 billion euros. This is the conclusion of a study by Cognizant, conducted with Roubini Global Economics, among more than 800 European companies.
    The study, The Work Ahead – Europe’s Digital Imperative, is part of a global research project into the changing nature of work in the digital age. The results show that the organizations that are most proactive in bringing the physical and virtual worlds closer together have the greatest chance of generating more revenue.
    Seizing the revenue potential
    Executives indicate that technologies such as Artificial Intelligence (AI), Big Data, and blockchain can be a source of new business models and revenue streams, changing customer relationships, and lower costs. In fact, respondents expect digital technologies to have a positive effect of 8.4 percent on revenue between now and 2018.
    Digitization can deliver both cost efficiency and revenue growth. For example, by applying intelligent process automation (IPA), in which software robots take over routine tasks, companies can save costs in the middle and back office. The analysis shows that the impact of digital transformation on revenue and cost savings in the industries studied (retail, financial services, insurance, manufacturing, and life sciences) comes to 876 million euros in 2018.
    Still laggards in the digital domain
    European executives expect a digital economy to be driven by a combination of data, algorithms, software robots, and connected devices. Asked which technology will have the greatest impact on work in 2020, Big Data emerges as the winner, named by no less than 99 percent of respondents. Strikingly, AI finishes a close second at 97 percent; respondents regard AI as more than hype. Indeed, they expect AI to take a central place in the future of work in Europe.
    On the other hand, the study shows that late adopters can expect a combined loss of 761 billion euros in 2018.
    A third of the surveyed managers say that, in their view, their employer lacks the knowledge and skills to implement the right digital strategy, or even has no idea what needs to be done. 30 percent of respondents believe their leadership invests too little in new technologies, while 29 percent encounter reluctance toward adopting new ways of working.
    The main obstacles for companies in making the move to digital are fear of security issues (24%), budget constraints (21%), and a lack of talent (14%).
    Euan Davis, European Head of the Centre for the Future of Work at Cognizant, explains: “To make the necessary move to digital, management must be proactive and prepare their organization for the future of work. Slow innovation cycles and an unwillingness to experiment are the kiss of death for organizations trying to properly exploit digital opportunities. Managing the digital economy is an absolute necessity for organizations. Companies that do not prioritize deepening, broadening, strengthening, or improving their digital footprint are playing a losing game from the start.”
    About the study
    The findings are based on a global survey of 2,000 executives across various industries, 250 middle managers responsible for other employees, 150 MBA students at major universities worldwide, and 50 futurists (journalists, academics, and authors). The survey of executives and managers was conducted in 18 countries in English, Arabic, French, German, Japanese, and Chinese. Executives were interviewed by phone, managers via an online questionnaire. The MBA students and futurists were surveyed in English through telephone interviews (MBA students in 15 countries, futurists in 10). The Work Ahead – Europe’s Digital Imperative comprises the 800 responses from the European survey of executives and managers. More details can be found in Work Ahead: Insights to Master the Digital Economy.
    Source: emerce.nl, November 28, 2016
  • Exploring the risks of artificial intelligence

    “Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.”

    These words, articulated by Neil Armstrong in a speech to a joint session of Congress in 1969, fit squarely into nearly every decade since the turn of the century, and it seems safe to posit that the rate of change in technology has accelerated to an exponential degree in the last two decades, especially in the areas of artificial intelligence and machine learning.

    Artificial intelligence is making an extreme entrance into almost every facet of society in predicted and unforeseen ways, causing both excitement and trepidation. This reaction alone is predictable, but can we really predict the associated risks involved?

    It seems we’re all trying to get a grip on potential reality, but information overload (yet another side effect that we’re struggling to deal with in our digital world) can ironically make constructing an informed opinion more challenging than ever. In the search for some semblance of truth, it can help to turn to those in the trenches.

    In my ongoing interviews with over 30 artificial intelligence researchers, I asked what they considered to be the most likely risk of artificial intelligence in the next 20 years.

    Some results from the survey, shown in the graphic below, included 33 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence).

    Two “greatest” risks bubbled to the top of the response pool (and most respondents are not in the autonomous-robots camp, though a few do fall into it). According to this particular set of minds, the most pressing short- and long-term risks are the financial and economic harm that may be wrought, along with the mismanagement of AI by human beings.

    Dr. Joscha Bach of the MIT Media Lab and Harvard Program for Evolutionary Dynamics summed up the larger picture this way:

    “The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.”

    Essentially, the introduction of AI may act as a catalyst that exposes and speeds up the imperfections already present in our society. Without a conscious and collaborative plan to move forward, we expose society to a range of risks, from bigger gaps in wealth distribution to negative environmental effects.

    Leaps in AI are already being made in the area of workplace automation and machine learning capabilities are quickly extending to our energy and other enterprise applications, including mobile and automotive. The next industrial revolution may be the last one that humans usher in by their own direct doing, with AI as a future collaborator and – dare we say – a potential leader.

    Some researchers believe it’s a matter of when and not if. In the words of Dr. Nils Nilsson, professor emeritus at Stanford University, “Machines will be singing the song, ‘Anything you can do, I can do better; I can do anything better than you’.”

    With respect to the drastic changes that lie ahead for the employment market due to increasingly autonomous systems, Dr. Helgi Helgason says, “it’s more of a certainty than a risk and we should already be factoring this into education policies.”

    Talks at the World Economic Forum Annual Meeting in Switzerland this past January, where the topic of the economic disruption brought about by AI was clearly a main course, indicate that global leaders are starting to plan how to integrate these technologies and adapt our world economies accordingly – but this is a tall order with many cooks in the kitchen.

    Another commonly expressed risk over the next two decades is the general mismanagement of AI. It’s no secret that those in the business of AI have concerns, as evidenced by the $1 billion investment made by some of Silicon Valley’s top tech gurus to support OpenAI, a non-profit research group with a focus on exploring the positive human impact of AI technologies.

    “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” is the parallel message posted on OpenAI’s launch page from December 2015. How we approach the development and management of AI has far-reaching consequences, and shapes future society’s moral and ethical paradigm.

    Philippe Pasquier, an associate professor at Simon Fraser University, said “As we deploy more and give more responsibilities to artificial agents, risks of malfunction that have negative consequences are increasing,” though he likewise states that he does not believe AI poses a high risk to society on its own.

    With great responsibility comes great power, and how we monitor this power is of major concern.

    Dr. Pei Wang of Temple University sees major risk in “neglecting the limitations and restrictions of hot techniques like deep learning and reinforcement learning. It can happen in many domains.” Dr. Peter Voss, founder of SmartAction, expressed similar sentiments, stating that he most fears “ignorant humans subverting the power and intelligence of AI.”

    Thinking about the risks associated with emerging AI technology is hard work, engineering potential solutions and safeguards is harder work, and collaborating globally on implementation and monitoring of initiatives is the hardest work of all. But considering all that’s at stake, I would place all my bets on the table and argue that the effort is worth the risk many times over.

    Source: Tech Crunch

  • Gaining control of big data with the help of NVMe

    Gaining control of big data with the help of NVMe

    Every day there is an unfathomable amount of data, nearly 2.5 quintillion bytes, being generated all around us. Part of the data being created we see every day, such as pictures and videos on our phones, social media posts, banking and other apps.

    In addition to this, there is data being generated behind the scenes by ubiquitous sensors and algorithms, whether that’s to process quicker transactions, gain real-time insights, crunch big data sets or to simply meet customer expectations. Traditional storage architectures are struggling to keep up with all this data creation, leading IT teams to investigate new solutions to keep ahead and take advantage of the data boom.

    Some of the main challenges are understanding performance, removing data throughput bottlenecks and being able to plan for future capacity. Architecture can often lock businesses in to legacy solutions, and performance needs can vary and change as data sets grow.

    Architectures designed and built around NVMe (non-volatile memory express) can provide the perfect balance, particularly for data-intensive applications that demand fast performance. This is extremely important for organizations that are dependent on speed, accuracy, and real-time data insights.

    Industries such as healthcare, autonomous vehicles, artificial intelligence (AI)/machine learning (ML), and genomics are at the forefront of the transition to high-performance NVMe storage solutions that deliver fast data access for high-performance computing systems that drive new research and innovations.


    Genomics

    With traditional storage architectures, detailed genome analysis can take upwards of five days to complete, which makes sense considering an initial analysis of one person’s genome produces approximately 300GB - 1TB of data, and a single round of secondary analysis on just one person’s genome can require upwards of 500TB of storage capacity. However, with an NVMe solution implemented, it’s possible to get results in just one day.

    In a typical study, genome research and life sciences companies need to process, compare and analyze the genomes of between 1,000 and 5,000 people per study. This is a huge amount of data to store, but it’s imperative that it’s done. These studies are working toward revolutionary scientific and medical advances, looking to personalize medicine and provide advanced cancer treatments. This is only now becoming possible thanks to the speed that NVMe enables researchers to explore and analyze the human genome.
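    As a rough back-of-the-envelope sketch of why the storage tier matters at this scale (the throughput figures below are assumed for illustration, not taken from the article): reading a 500 TB working set sequentially differs by roughly a factor of ten between a SATA-class and an NVMe-class device.

```python
# Hypothetical sequential-read comparison; real analysis pipelines involve
# much more than raw reads, so treat these as order-of-magnitude numbers.
bytes_total = 500 * 10**12            # ~500 TB secondary-analysis working set

drives = {"SATA-class (~500 MB/s)": 500 * 10**6,   # bytes per second
          "NVMe-class (~5 GB/s)": 5 * 10**9}

read_days = {}
for label, bps in drives.items():
    days = bytes_total / bps / 86400  # seconds per day
    read_days[label] = round(days, 1)
    print(f"{label}: {days:.1f} days")
```

The roughly tenfold gap in raw throughput mirrors the article's five-days-to-one-day claim for genome analysis, even before parallelism and latency effects are considered.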

    Autonomous vehicles

    A growing trend in the tech industry is autonomous vehicles. Self-driving cars are the next big thing, and various companies are working tirelessly to perfect them. In order to function properly, these vehicles need very fast storage to accelerate the applications and data that ‘drive’ autonomous vehicle development. Core requirements for autonomous vehicle storage include:

    • Must have a high capacity in a small form factor
    • Must be able to accept input data from cameras and sensors at “line rate” – AKA have extremely high throughput and low latency
    • Must be robust and survive media or hardware failures
    • Must be “green” and have minimal power footprint
    • Must be easily removable and reusable
    • Must use simple but robust networking

    What kind of storage meets all these requirements? That’s right – NVMe.

    Artificial Intelligence

    Artificial Intelligence (AI) is gaining a lot of traction in industries ranging from finance to manufacturing and beyond. In finance, AI does things like predict investment trends. In manufacturing, AI-based image recognition software checks for defects during product assembly. Wherever it’s used, AI needs a high level of computing power, coupled with a high-performance, low-latency architecture, to enable parallel processing of data in real time.

    Once again, NVMe steps up to the plate, providing the speed and processing power that are critical during training and inference. Without NVMe to prevent bottlenecks and latency issues, these stages can take much, much longer, which in turn can lead to the temptation to take shortcuts, causing software to malfunction or make incorrect decisions down the line.

    The rapid increase in data creation has put traditional storage architectures under high pressure due to their lack of scalability and flexibility, both of which are required to fulfill future capacity and performance requirements. This is where NVMe comes in, breaking the barriers of existing designs by offering unprecedented density and performance. These breakthroughs give NVMe the qualities needed to help manage and maintain the data boom.

    Author: Ron Herrmann

    Source: Dataversity


  • How does augmented intelligence work?

    Computers and devices that think along with us have long ceased to be science fiction. Artificial intelligence (AI) can be found in washing machines that adapt their program to the size of the load and in computer games that adjust to the players’ skill level. How can computers help people make smarter decisions? This extensive whitepaper describes the models applied in the HPE IDOL analytics platform.

    Mathematical models provide the human touch

    Processors can perform in the blink of an eye a calculation that would take humans weeks or months. That is why computers are better at chess than humans, but worse at poker, where the human element plays a larger role. How does a search and analytics platform ensure that more of the ‘human’ ends up in the analysis? This is achieved by using various mathematical models.

    Analytics for text, audio, images, and faces

    The art is to extract actionable information from data. This is accomplished by applying pattern recognition to different datasets. Classification, clustering, and analysis also play a major role in obtaining the right insights. Not only text is analyzed; increasingly, audio files and images, objects, and faces are analyzed as well.

    Artificial intelligence helps people

    The whitepaper describes in detail how patterns are found in text, audio, and images. How does a computer understand that the video it is analyzing is about a human being? How are flat images turned into a geometric 3D image, and how does a computer decide what it is seeing? Think, for example, of an automated alert to the control room when a grandstand gets too crowded or a traffic jam forms. How do theoretical models help computers perceive the way humans do and support our decisions? You can read all this and more in the whitepaper Augmented Intelligence: Helping Humans Make Smarter Decisions, available via AnalyticsToday.

    Source: Analyticstoday.nl, October 12, 2016

  • How AI is influencing web design

    How AI is influencing web design

    Artificial intelligence in web design is making a major impact. This is what to know about how it works and how effective it can be.

    When Alan Turing invented the first intelligent machine, few could have predicted that the advanced technology would become as widespread and ubiquitous as it is today.

    Since then, companies have adopted AI (artificial intelligence) for pretty much everything, from self-driving cars to medical technology to banking. We live in the age of big data, an age in which we use machines to collect and analyze massive amounts of data in a way that humans couldn’t do on their own. In many respects, the cognition of machines is already surpassing that of humans.

    With the explosion of the internet, AI has also become a critical element of web design. Artificial intelligence has helped with everything from the building and customization of websites and brands to the way users experience those websites themselves.

    Here are some of the ways AI is making web design increasingly sophisticated:

    AI designs websites

    Artificial design intelligence (ADI) tools are the building blocks of many of today’s websites. These days, ADI systems have evolved into effective tools with functional and attractive results. Wix and Bookmark, for example, offer popular automated website building tools with customizable options. Designers, developers, and everyday entrepreneurs no longer have to build websites from the ground up, nor do they need to spend hours choosing the perfect template. Instead, both Wix and Bookmark claim that websites can intelligently design themselves, using nothing more than the site’s name and the answers to a few quick questions.

    Not only does AI help engineer the web building process, but it’s also become the designer behind the brand names and logos that dominate a website’s home page. Companies are turning to artificial intelligence to automate their branding process, using AI tools like Tailor Brands to design their own customized logos in seconds. In this way, AI has made good web design more accessible and affordable for big companies and small-scale entrepreneurs alike.

    AI enhances user experience

    AI isn’t just changing web design on the developer end, it’s changing the way users experience websites, too. AI is the force behind the chatbots that offer conversation or assistance on many companies’ websites. While conversations with chatbots once felt frustrating, repetitive, and a little too robotic, more sophisticated AI-powered chatbots use natural language processing (NLP) to have more natural, authentic conversations and to genuinely “understand” their customers’ needs. Sephora’s chatbot on the Kik messaging platform is one example of a powerful NLP chatbot that understands customers’ beauty needs and provides recommendations based on those needs.

    In addition to the practical value of chatbots, the prevalence of chatbots indicates an increasing shift towards customer-focused websites, ones that prioritize drawing customers in over getting their message out. With the emergence of AI chatbots, websites have transformed into customer engagement platforms, where customers can offer their feedback, ask for help, or find products or services suited to their preferences.

    AI analyzes results

    We’ve seen how AI has benefitted both website building and user experience. A third way AI is affecting web design is by making possible analytics tools that help companies analyze their results and refine their websites accordingly.

    By crunching down big data into analyzable numbers and patterns, predictive analytics tools like TensorFlow and Infosys Nia reveal real-time insights about what does and doesn’t work for website visitors and prospective customers. This enables businesses to understand which types of customers are drawn to their site, and to accommodate those visitors with a seamless user experience. Using results from AI-powered analytics platforms, web developers and designers are able to tweak and refine their site and make it increasingly user-friendly.
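    In miniature, the kind of analysis described might look like this; the web-analytics events and metrics are invented for illustration:

```python
import pandas as pd

# Invented visitor events: which page variant keeps visitors engaged longest?
events = pd.DataFrame({
    "variant":      ["A", "A", "B", "B", "A", "B"],
    "session_secs": [30, 45, 80, 95, 25, 70],
    "converted":    [0, 0, 1, 1, 0, 1],
})

# Aggregate raw events into per-variant engagement and conversion metrics
summary = events.groupby("variant").agg(
    avg_session=("session_secs", "mean"),
    conversion_rate=("converted", "mean"),
)
print(summary)
```

Even this toy aggregation shows the feedback loop the article describes: raw behavioral data is condensed into numbers a designer can act on, here pointing clearly at variant B.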

    AI in web design: where is it heading next?

    AI is already being used in web design to make site building and design easier and more accessible, to enhance UX and further user engagement, and to drive site improvement through big data analytics. As artificial intelligence becomes even more advanced, affordable, and widespread, it will continue to affect web design in ways we can only imagine. Will improved natural language processing make chatbots indistinguishable from human representatives? Will websites readily adapt, real-time, to users’ preferences and needs? Whatever happens, AI is already the new normal.

    Author: Diana Hope

    Source: SmartDataCollective

  • How artificial intelligence will shape the future of business

    How artificial intelligence will shape the future of business

    From the boardroom at the office to your living room at home, artificial intelligence (AI) is nearly everywhere nowadays. Tipped as the most disruptive technology of all time, it has already transformed industries across the globe. And companies are racing to understand how to integrate it into their own business processes.

    AI is not a new concept. The technology has been with us for a long time, but in the past, there were too many barriers to its use and applicability in our everyday lives. Now improvements in computing power and storage, increased data volumes and more advanced algorithms mean that AI is going mainstream. Businesses are harnessing its power to reinvent themselves and stay relevant in the digital age.

    The technology makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. It does this by processing large amounts of data and recognising patterns. AI analyses much more data than humans at a much deeper level, and faster.

    Most organisations can’t cope with the data they already have, let alone the data that is around the corner. So there’s a huge opportunity for organisations to use AI to turn all that data into knowledge to make faster and more accurate decisions.

    Customer experience

    Customer experience is becoming the new competitive battleground for all organisations. Over the next decade, businesses that dominate in this area will be the ones that survive and thrive. Analysing and interpreting the mountains of customer data within the organisation in real time and turning it into valuable insights and actions will be crucial.

    Today most organisations are using data only to report on what their customers did in the past. SAS research reveals that 93% of businesses currently cannot use analytics to predict individual customer needs.

    Over the next decade, we will see more organisations using machine learning to predict future customer behaviours and needs. Just as an AI machine can teach itself chess, organisations can use their existing massive volumes of customer data to teach AI what the next-best action for an individual customer should be. This could include what product to recommend next or which marketing activity is most likely to result in a positive response.
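    A next-best-action model of this kind can be sketched very simply; the features, labels, and nearest-centroid approach below are illustrative stand-ins for the production-grade models the article alludes to:

```python
import numpy as np

# Each row: [recency_days, frequency, monetary_value]; labels are historical
# best actions (0 = discount offer, 1 = product recommendation). All invented.
X = np.array([[5, 12, 900], [40, 2, 50], [10, 8, 400],
              [90, 1, 20], [3, 15, 1200], [60, 3, 80]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

# Normalize features, then classify a new customer by nearest class centroid,
# a deliberately simple stand-in for a learned model.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma
centroids = {c: Xn[y == c].mean(axis=0) for c in (0, 1)}

def next_best_action(customer):
    cn = (np.asarray(customer, dtype=float) - mu) / sigma
    return min(centroids, key=lambda c: np.linalg.norm(cn - centroids[c]))

action = next_best_action([7, 10, 700])
print("recommend product" if action == 1 else "send discount")
```

The principle is the one the article describes: historical customer data teaches the system which action worked for similar customers, and new customers are matched against that experience.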

    Automating decisions

    In addition to improving insights and making accurate predictions, AI offers the potential to go one step further and automate business decision making entirely.

    Front-line workers or dependent applications make thousands of operational decisions every day that AI can make faster, more accurately and more consistently. Ultimately this automation means improving KPIs for customer satisfaction, revenue growth, return on assets, production uptime, operational costs, meeting targets and more.

    Take Shop Direct, for example, which owns the Littlewoods and Very brands. It uses AI from SAS to analyse customer data in real time and automate decisions to drive groundbreaking personalisation at an individual customer level. This approach saw Shop Direct’s profits surge by 40%, driven by a 15.9% increase in sales from Very.co.uk.

    AI is here. It’s already being adopted faster than the arrival of the internet. And it’s delivering business results across almost every industry today. In the next decade, every successful company will have AI. And the effects on skills, culture and structure will deliver superior customer experiences.

    Author: Tiffany Carpenter

    Source: SAS

  • How to create a trusted data environment in 3 essential steps

    How to create a trusted data environment in 3 essential steps

    We are in the era of the information economy. Nowadays, more than ever, companies have the capabilities to optimize their processes through the use of data and analytics. While there are endless possibilities when it comes to data analysis, there are still challenges in maintaining, integrating, and cleaning data to ensure that it empowers people to make decisions.

    Bottom-up or top-down: which is best?

    As IT teams begin to tackle the data deluge, a question often asked is: should this problem be approached from the bottom up or from the top down? There is no “one-size-fits-all” answer here, but every data team needs a high-level view of its data subject areas. Think of this high-level view as a map you create to define priorities and identify problem areas for your business within the modern data-based economy. This map will allow you to set up a phased approach to optimizing your most value-contributing data assets.

    The high-level view unfortunately is not enough to turn your data into valuable assets. You also need to know the details of your data.

    Getting the details from your data is where a data profile comes into play. The profile tells you what your data is from the technical perspective, while the high-level view (the enterprise information model) gives you the view from the business perspective. Real business value comes from the combination of both: a transversal, holistic view of your data assets that allows you to zoom in or out. The high-level view enriched with technical details (even without the profiling) allows you to start with the most important phase of the digital transformation: discovery of your data assets.

    Not only data integration, but data integrity

    With all the data travelling around in different types and sizes, integrating the data streams across various partners, apps, and sources has become critical. But it’s more complex than ever.

    Due to the sizes and variety of data being generated, not to mention the ever-increasing speed in go to market scenarios, companies should look for technology partners that can help them achieve this integration and integrity, either on premise or in the cloud.

    Your 3 step plan to trusted data

    Step 1: Discover and cleanse your data

    A recent IDC study found that only 19% of a data professional’s time is spent analyzing information and delivering valuable business outcomes. They spend 37% of their time preparing data and 24% of their time goes to protecting data. The challenge is to overcome these obstacles by bringing clarity, transparency, and accessibility to your data assets.

    Building this discovery platform, which at the same time allows you to profile your data, understand its quality, and compute a confidence score that builds trust with the business users of the data assets, takes the form of an auto-profiling data catalog.
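    One possible way to compute such a confidence score is as a weighted blend of simple quality checks; the checks and weights below are an assumption for illustration, not a standard formula:

```python
import pandas as pd

def confidence_score(df: pd.DataFrame) -> float:
    """Blend completeness, uniqueness, and validity into one 0-100 score.
    Weights and checks are illustrative choices, not an industry standard."""
    completeness = 1 - df.isna().mean().mean()   # share of non-null cells
    uniqueness = 1 - df.duplicated().mean()      # share of non-duplicate rows
    # Example validity rule: numeric values should be non-negative
    validity = (df.select_dtypes("number") >= 0).mean().mean()
    weights = {"completeness": 0.5, "uniqueness": 0.3, "validity": 0.2}
    score = (weights["completeness"] * completeness
             + weights["uniqueness"] * uniqueness
             + weights["validity"] * validity)
    return round(100 * score, 1)

df = pd.DataFrame({"id": [1, 2, 2, 4], "amount": [10.0, None, 5.0, -3.0]})
print(confidence_score(df))
```

Publishing a single number like this alongside each catalog entry gives business users a quick, comparable signal of how much trust to place in a data asset.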

    Thanks to the application of Artificial Intelligence (AI) and Machine Learning (ML) in data catalogs, data profiling can be offered as a self-service capability for power users.

    Bringing transparency, understanding, and trust to the business brings out the value of the data assets.

    Step 2: Organize data you can trust and empower people

    According to the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, 2017: “By 2020, organizations that offer users access to a curated catalog of internal and external data will realize twice the business value from analytics investments than those that do not.”

    An important phase in a successful data governance framework is establishing a single point of trust. From the technical perspective, this translates to collecting all the data sets in a single point of control. The governance aspect is the capability to assign roles and responsibilities directly in that central point of control, which lets you operationalize your governance instantly, from the place the data originates.

    Organizing your data assets goes hand in hand with business understanding of the data, transparency, and provenance. An end-to-end view of your data lineage ensures compliance and risk mitigation.

    With the central compass in place and the roles and responsibilities assigned, it's time to empower people for data curation and remediation, in which ongoing communication is of vital importance for the adoption of a data-driven strategy.

    Step 3: Automate your data pipelines & enable data access

    Different layers and technologies make our lives more complex. It is important to keep our data flows and streams aligned and to adapt to swift changes in business needs.

    The required transformations, data quality profiling, and reporting can be extensively automated.

    Start small and scale big. Much of this intelligence can nowadays be achieved by applying AI and ML. These algorithms take cumbersome work out of analysts' hands and scale more easily. This automation gives analysts a faster understanding of the data and lets them build better insights, faster, in a given amount of time.
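
    As a toy illustration of such automation, a rule-based quality check can run over every data batch without an analyst in the loop; the field names and rules here are invented for the example:

```python
# Hypothetical rule-based data quality checks that could run inside an
# automated pipeline; the rules and field names are illustrative.
RULES = {
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def quality_report(records):
    """Count rule violations per field across a batch of records."""
    failures = {field: 0 for field in RULES}
    for record in records:
        for field, check in RULES.items():
            if not check(record.get(field)):
                failures[field] += 1
    return failures

report = quality_report([
    {"email": "a@x.com", "age": 30},
    {"email": "broken", "age": 200},
])
```

    Scheduled against every incoming batch, a report like this surfaces quality regressions immediately instead of waiting for an analyst to notice them.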

    Putting data at the center of everything, implementing automation, and provisioning it through one single platform is one of the key success factors in your digital transformation and in becoming a real data-driven organization.

    Source: Talend

  • How to improve your business processes with Artificial Intelligence?

    How to improve your business processes with Artificial Intelligence?

    In the age of digital disruption, even the world's largest companies aren't impervious to agile competitors that move quickly, iterate fast, and have the capacity to build products faster than their peers. That's why many legacy organizations are taking a closer look at business process management.

    Simply speaking, business process management is the practice of reengineering existing systems in your firm for better productivity and efficiency. It takes a proactive approach towards identifying business problems and the steps needed to rectify them. And while business process management has traditionally been the forte of management consultants and other functional experts, rapid advancements in artificial intelligence and big data mean this sector is also undergoing a fundamental transformation.

    This raises the question: how do you start "plugging AI" into your company's existing data and systems?

    Where to begin?

    Artificial intelligence is exciting because it promises a totally new approach to business operations. However, most traditional organizations don't have the necessary infrastructure and/or computing power to deploy these technologies.

    Moving your data and applications to the cloud is a very popular solution to unlocking the necessary computing resources, but there's a catch. You can’t just copy-paste your files to the cloud and start using AI. Older systems weren’t built with a cloud deployment in mind, so leveraging the cloud usually requires rebuilding your existing software using a common cloud-ready platform like Kubernetes, Pivotal Cloud, and Docker Swarm.

    The point is that once you make a decision towards digital transformation, you need complete buy-in from all areas of the business and a commitment to process and technology changes. Getting that commitment typically involves showcasing the real benefits that AI can unlock. Let’s take a closer look at how artificial intelligence is actively impacting the way companies do their business.

    1. Analyzing sales calls

    When it comes to improving business processes and operations, one crucial area is definitely sales calls. That's because sales, and the revenue that ensues from them, are the bread and butter of your business. Top-tier sales representatives will ensure your firm keeps chugging along and breaking new ground.

    In the past, analyzing sales calls was a manual process. There might have been a standard sales playbook with generic questions that each individual would be expected to ask. But now, AI conversational tools like Gong are automating this process entirely.

    Gong is able to record each outbound sales call that your team makes and pick up on cues that help it determine how the call went. So, a successful sales call will probably see the prospect talking more than the sales rep, for example.
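
    Gong's internals are proprietary, but one cue the paragraph above points at, how much the prospect talks relative to the rep, is easy to sketch over a diarized transcript:

```python
def talk_ratio(transcript):
    """Fraction of words spoken by the prospect vs. everyone on the call.

    transcript: list of (speaker, utterance) tuples, speaker in {"rep", "prospect"}.
    """
    counts = {"rep": 0, "prospect": 0}
    for speaker, utterance in transcript:
        counts[speaker] += len(utterance.split())
    total = sum(counts.values())
    return counts["prospect"] / total if total else 0.0

ratio = talk_ratio([
    ("rep", "How are you finding the current process"),
    ("prospect", "Honestly it is slow and we lose data every week"),
    ("prospect", "We would switch for something faster"),
])
```

    A ratio above 0.5 would indicate the prospect did most of the talking, the pattern the article associates with a successful call.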

    2. Converting voicemail into text

    Have you ever heard the phrase: “Your unhappiest customers are your greatest source of learning?” These famous words were said by none other than Bill Gates. But how can you even accurately quantify customer sentiment if you don’t take the requisite steps to track it?

    It’s certainly possible that a large chunk of your customers don’t want to remain on hold while waiting for a customer support agent and prefer to leave a voicemail instead. Intelligent automation tools like Workato are making it possible to automate voicemail follow-ups, thereby ensuring that no customer falls through the cracks and each one is given an appropriate response to their concerns.

    For example, Workato was able to help automate voicemail follow-ups for a large chain of cafes. Whenever a new voicemail came into its system, the intelligent tool would use speech to text conversion to create a transcript of the voicemail. It would then take that text and add it on the service ticket, giving customer support agents a much better idea of the nature of the complaint and allowing them to resolve it quicker.
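
    The flow described above can be sketched in a few lines; `transcribe` is a hypothetical stand-in for whatever speech-to-text service the automation platform actually calls:

```python
def transcribe(audio_bytes):
    # Placeholder: a real pipeline would call a speech-to-text API here.
    return "Order 4512 arrived damaged, please call me back."

def voicemail_to_ticket(ticket, audio_bytes):
    """Attach a voicemail transcript to an existing service ticket."""
    transcript = transcribe(audio_bytes)
    updated = dict(ticket)  # don't mutate the caller's ticket
    updated["notes"] = updated.get("notes", []) + [
        f"Voicemail transcript: {transcript}"
    ]
    return updated

ticket = voicemail_to_ticket({"id": 101, "status": "open"}, b"...")
```

    The transcript lands on the ticket as a note, giving the support agent the context the article describes before they ever return the call.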

    3. Detecting fraud

    Occupational fraud causes organizations to lose about 5% of their total revenue every year with a potential total loss of $3.5 trillion. Machine learning algorithms are actively quelling this trend by spotting discrepancies and anomalies in everyday processes.

    For example, banks and financial institutions use intelligent algorithms to detect suspicious money transfers and payments. This process is also applicable in cybersecurity, tax evasion, customs clearing processes, insurance, and other fields. Large-scale organizations that are able to leverage AI are potentially looking at cost savings in the millions of dollars each year. These resources can then be spent in other critical areas of business such as research and development so companies can stay competitive and ahead of the curve.
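
    Real fraud models are far more sophisticated, but the core idea of flagging statistical outliers in everyday transactions can be shown with a toy z-score check (the threshold here is chosen purely for illustration):

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Flag transfer amounts more than `threshold` population standard
    deviations from the mean -- a toy stand-in for the anomaly detection
    banks apply to payments."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

flagged = flag_anomalies([100, 95, 110, 105, 98, 102, 5000])
```

    Production systems use supervised and unsupervised models over many features rather than a single amount column, but the principle of scoring deviation from normal behavior is the same.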


    Artificial intelligence isn’t just a fancy buzzword that people are tossing around with willful abandon. In fact, every time you take advantage of Google’s typo detection feature (when you see ‘did you mean’ in the search engine) you’re actually plugging into its DeepMind platform, an example of AI in everyday use.

    AI has the potential to deliver greater efficiency, higher output, fewer interruptions, and, ultimately, higher revenue across businesses of all shapes and sizes.

    Author: Santana Wilson

    Source: Oracle

  • An intelligent organization always has room for a chatbot in HR

    An intelligent organization always has room for a chatbot in HR

    People are the heart of a company, and the Human Resources department exists to take care of those people. HR guards the company culture and makes sure employees get opportunities to grow. It keeps the company lively and healthy. HR, in other words, revolves around people. Does a virtual assistant, i.e. a chatbot, really belong among all these people?

    Although HR revolves around the people within an organization, HR staff spend roughly a quarter of their time on administrative tasks. Answering employee questions, for example, is a daily recurring duty. Questions like 'how many vacation days do I have left?' or 'what are the rules around sick leave?' come up almost every day. A chatbot can answer all of these employee questions. That not only relieves the HR manager, it also gives immediate clarity to the employees asking them. No more frustration over waiting ages for the answer to a simple question. Sounds good, right?

    A chatbot can also keep accurate track of the questions asked, in order to spot bottlenecks in HR policy. Moreover, with the help of artificial intelligence a chatbot gets smarter as it receives more questions: the answers it gives become better and more accurate every day. This is known as machine learning.
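
    As a minimal sketch of the idea (real HR chatbots use trained language models rather than keyword lists, and the questions and answers below are invented):

```python
# Toy intent matcher illustrating how an HR chatbot might route common
# questions to canned answers based on keyword overlap.
FAQ = {
    ("vacation", "days", "holiday"): "You have 12 vacation days remaining.",
    ("sick", "leave", "illness"): "Report sickness before 9:00 via the HR portal.",
}

def answer(question):
    """Return the FAQ reply whose keywords best overlap the question."""
    words = set(question.lower().split())
    best, best_overlap = None, 0
    for keywords, reply in FAQ.items():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best or "I'll forward this to an HR colleague."

reply = answer("How many vacation days do I have left?")
```

    Logging which questions fall through to the fallback answer is exactly how such a bot would surface the policy bottlenecks mentioned above.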

    Personal answers for specific situations

    Requesting leave in particular is an administrative task that often takes a lot of time; think of applying for maternity leave, for example. A bot can give personal answers and solutions for such a specific request.

    The chatbot can also play a role during illness. One of HR's most important tasks is keeping staff motivated. To contribute to that, a chatbot can, for example, send a get-well message when someone calls in sick. The virtual assistant can also ask and track how that person is doing, keeping an eye on their recovery.

    Smoothing out application procedures with a chatbot

    Given the current labor market, finding new staff is often hard, so it is essential that the application process runs flawlessly. A chatbot can optimize it by answering an applicant's questions immediately. After answering a question, the chatbot can itself collect valuable data about the applicant. The bot stores the answers, making it easier to screen candidates. This makes life easier not only for the recruiter, but also for the applicant.

    The vast majority of applicants, roughly 80%, consider going elsewhere if they do not receive regular updates on their application during the process. They stay on board if they are kept informed of where things stand. A bot can keep an applicant up to date and thus start the recruitment process on a positive note. Once the applicant has made it through the selection procedure and enters their probation period, onboarding begins. Onboarding is a crucial period for getting a new employee up to speed in the organization as quickly as possible. Instead of working through a checklist, the chatbot can take over a large part of onboarding from HR, letting the employee get started quickly on their own. With all documents and information ready in the chatbot, HR can focus more on the personal side of onboarding.

    A chatbot for HR means more room for people

    Despite the rise of new technology, the world of HR is one that revolves around people. People who need time to be there for each other, instead of being constantly tied up with administrative tasks. HR should be able to focus on employee development and act as a mentor. HR should be able to find the perfect new colleague and pursue the organization's goals. Deploying a chatbot takes exactly the work off their hands that stands in the way of this. A company can then not only focus on what matters, but also give its employees the room to do what they are good at, by always standing ready with the right information and advice. That is why an intelligent organization always has room for a chatbot in HR.

    Author: Joris Jonkman

    Source: Emerce

  • Integrating security, compliance, and session management when deploying AI systems

    Integrating security, compliance, and session management when deploying AI systems

    As enterprises adopt AI (artificial intelligence), they'll need a sound deployment framework that enables security, compliance, and session management.

    As accessible as the various dimensions of AI are to today's enterprise, one simple fact remains: embedding scalable AI systems into core business processes in production depends on a coherent deployment framework. Without it, AI's potential automation and acceleration benefits almost certainly become liabilities, or will never be fully realized.

    This framework functions as a guardrail for protecting and managing AI systems, enabling their interoperability with existing IT resources. It's the means by which AI implementations with intelligent bots interact with one another for mission-critical processes.

    With this method, bots are analogous to railway cars transporting data between sources and systems. The framework is akin to the tracks the cars operate on, helping the bots to function consistently and dependably. It delivers three core functions:

    • Security
    • Compliance and data governance
    • Session management

    With this framework, AI becomes as dependable as any other well-managed IT resource. The three core functions each need to be supported as follows.


    Security

    A coherent AI framework primarily solidifies a secure environment for applied AI. AI is a collection of various cognitive computing technologies: machine learning, natural language processing (NLP), etc. Applied AI is the application of those technologies to fundamental business processes and organizational data. Therefore, it's imperative for organizations to tailor their AI frameworks to their particular security needs in accordance with measures such as encryption or tokenization.

    When AI is subjected to these security protocols the same way employees or other systems are, there can be secure communication between the framework and external resources. For example, organizations can access optical character recognition (OCR) algorithms through AWS or cognitive computing options from IBM's Watson while safeguarding their AI systems.

    Compliance (and data governance)

    In much the same way organizations personalize their AI frameworks for security, they can also customize them for the various dimensions of regulatory compliance and data governance. Of cardinal importance is the treatment of confidential, personally identifiable information (PII), particularly with the passage of GDPR and other privacy regulations.

    For example, when leveraging NLP it may be necessary to communicate with external NLP engines. The inclusion of PII in such exchanges is inevitable, especially when dealing with customer data. However, the AI framework can be adjusted so that when PII is detected, it's automatically compressed, mapped, and rendered anonymous so bots deliver this information only according to compliance policies. It also ensures users can access external resources in accordance with governance and security policies.
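
    A toy version of such PII masking, applied before text leaves the organization; production systems detect far more PII categories, and far more robustly than these two regexes:

```python
import re

# Illustrative patterns only: real PII detection covers names, addresses,
# account numbers, and more, and handles many formats per category.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def mask_pii(text):
    """Replace detected PII with placeholder tokens before the text is
    sent to an external NLP engine."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

masked = mask_pii("Contact jane.doe@example.com or +31 6 1234 5678.")
```

    The framework would apply a step like this automatically on every outbound exchange, so bots only ever deliver information in line with compliance policies.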

    Session management

    The session management capabilities of coherent AI frameworks are invaluable for preserving the context between bots for stateful relevance of underlying AI systems. The framework ensures communication between bots is pertinent to their specific functions in workflows.

    Similar to how DNA is passed along, bots can contextualize the data they disseminate to each other. For example, a general-inquiry bot may answer users' questions about various aspects of a job. However, once someone applies for the position, that bot must understand the context of the application data and pass it along to an HR bot. The framework provides this session management for the duration of the data's journey within the AI systems.
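
    The hand-off described above can be sketched as a session object passed between bots; the bot names and fields are illustrative:

```python
# Two cooperating bots sharing one session: the inquiry bot records context,
# and the HR bot later reuses it without re-asking the user.
def inquiry_bot(session, message):
    """Answer a general question and remember the topic in the session."""
    session["last_topic"] = "job_opening"
    return session, "The role is in the data team. Want to apply?"

def hr_bot(session, applicant_name):
    """Process an application using context set earlier in the session."""
    topic = session.get("last_topic", "unknown")
    return f"Application from {applicant_name} received for: {topic}"

session = {}
session, _ = inquiry_bot(session, "Tell me about the job")
confirmation = hr_bot(session, "Sam")
```

    The framework's job is to carry that session state for the whole duration of the data's journey, so each bot stays stateful and contextually relevant.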

    Key benefits

    The outputs of the security, compliance, and session management functions respectively enable three valuable benefits:

    No rogue bots: AI systems won't go rogue thanks to the framework's security. The framework ingrains security within AI systems, extending the same benefits for data privacy. This can help you comply with today's strict regulations in countries such as Germany and India about where data is stored, particularly data accessed through the cloud. The framework prevents data from being stored or used in ways contrary to security and governance policies, so AI can safely use the most crucial system resources.

    New services: The compliance function makes it easy to add new services external to the enterprise. Revisiting the train analogy, a new service is like a new car on the track. The framework incorporates it within the existing infrastructure without untimely delays so firms can quickly access the cloud for any necessary services to assist AI systems.

    Critical analytics: Finally, the session management function issues real-time information about system performance, which is important when leveraging multiple AI systems. It enables organizations to define metrics relevant to their use cases, identify anomalies, and increase efficiency via a machine-learning feedback loop with predictions for optimizing workflows.

    Necessary advancements

    Organizations that develop and deploy AI-driven business applications that can think, act, and complete processes autonomously, without human intervention, will need a sound deployment framework. By delivering a road map for what data is processed, as well as how, where, and why, the framework aligns AI with an organization's core values and is vital to scaling these technologies for mission-critical applications. It's the foundation for AI's transformative potential and, more important, its enduring value to the enterprise.

    Author: Ramesh Mahalingam

    Source: TDWI

  • Intelligence, automation, or intelligent automation?

    Intelligence, automation, or intelligent automation?

    There is a lot of excitement about artificial intelligence (AI), and also a lot of fear. Let’s set aside the potential for robots to take over the world for the moment and focus on more realistic fears. There is a growing acceptance that AI will change the way we work. There is also agreement that it is likely to result in a number of jobs disappearing or being replaced by AI systems, and others appearing.

    This has fueled the discussion on the ethics around intelligence, especially AI. Thoughtful commentators note that it is unwise to separate the two. Some have suggested frameworks for the ethical development of AI. Underpinning the ethical discussion, however, is the question of what exactly AI will be used for. It is hard to develop an ethics framework out of the blue. This blog unpicks that issue a little, sharing thoughts about where and how AI is used and how this will affect the value that businesses obtain from AI.

    Defining intelligence

    Artificial intelligence has been defined as the ability of a system to interpret data, learn from it, and then use what it has learnt to adapt and thereby achieve particular tasks. There are therefore three elements to AI:

    1. The system has to correctly interpret data and draw the right conclusions.

    2. It must be able to learn from its interpretation.

    3. It must then be able to use what it has learnt to achieve a task. Simply being able to learn or, indeed, to interpret data or perform a task is not enough to make a system AI-based.
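
    These three elements can be illustrated with a deliberately tiny learner: it interprets observations, learns label frequencies from them, and uses what it has learnt to act (here, to classify). This is a sketch of the definition, not a real AI system:

```python
class TinyLearner:
    """Minimal interpret / learn / act loop."""

    def __init__(self):
        self.counts = {}

    def learn(self, observation, label):
        # Interpret the observation (normalize it) and update what we know.
        key = observation.lower()
        self.counts.setdefault(key, {}).setdefault(label, 0)
        self.counts[key][label] += 1

    def act(self, observation):
        # Use what was learnt to achieve the task: pick the likeliest label.
        seen = self.counts.get(observation.lower(), {})
        return max(seen, key=seen.get) if seen else "unknown"

learner = TinyLearner()
learner.learn("Invoice overdue", "finance")
learner.learn("invoice overdue", "finance")
prediction = learner.act("INVOICE OVERDUE")
```

    A lookup table alone would only "perform a task"; it is the combination of interpreting input, learning from it, and acting on the result that satisfies the definition above.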

    As consumers, most of our contact with AI is with systems like Alexa and Siri. These are definitely "intelligent," in that they take in what we say, interpret it, learn from experience and perform tasks correctly as a result. However, in business, there is general acceptance that much of the real value from AI will come from automation. In other words, AI will be used to mimic or replace human actions. This is now becoming known as 'intelligent automation'.

    Where does intelligence start and automation stop, though? There are plenty of tasks that can be automated simply and easily, without any need for an intelligent system. A lot of the time, the ability to automate tasks overshadows the need for intelligence to drive the automation. This typically results in very well-integrated systems, which often have decision-making capabilities. However, the quality of those decisions is often ignored.

    Good AI algorithms can suggest extremely good options for decisions, and ignoring this limits the value that companies can get out of their investments in AI. Equally, failing to consider whether the quality of a decision is good enough can lead to poor decisions being made. This undermines trust in the algorithm, which is then used less for decisions, again reducing the value. But how can you assess and ensure the quality of the decisions made or recommended by the algorithm?

    Balancing automation and intelligence

    An ideal AI deployment strikes a balance between automation and intelligence. If you lean too far towards the automation side and rely on simple rules-based automation, all you will be able to do is collect the low-hanging fruit. You will miss out on the potential to use the AI system to support more sophisticated decision making. Lean too far in the other direction, though, and you get intelligence without automation: systems like Alexa and Siri, useful for consumers, but not so much for businesses.

    In business, analytics needs to be at the heart of an AI system. The true measure of a successful AI deployment lies in being able to mimic both human action and human decision making.

    An AI deployment has a huge range of components; it would not be unreasonable to describe it as an ecosystem. This ecosystem might contain audio-visual interpretation functions, multisystem and/or multichannel integration, and human-computer interface components. However, none of those would mean anything without the analytical brain at the centre. Without that, the rest of the ecosystem is simply a lifeless body. It needs the analytics component to provide direction and interpretation of the world around it.

    Author: Yigit Karabag

    Source: SAS

  • Investing In Artificial Intelligence

    Artificial intelligence is one of the most exciting and transformative opportunities of our time. From my vantage point as a venture investor at Playfair Capital, where I focus on investing and building community around AI, I see this as a great time for investors to help build companies in this space. There are three key reasons.

    First, with 40 percent of the world’s population now online, and more than 2 billion smartphones being used with increasing addiction every day (KPCB), we’re creating data assets, the raw material for AI, that describe our behaviors, interests, knowledge, connections and activities at a level of granularity that has never existed.

    Second, the costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing, making AI applications possible and affordable.

    Third, we’ve seen significant improvements recently in the design of learning systems, architectures and software infrastructure that, together, promise to further accelerate the speed of innovation. Indeed, we don’t fully appreciate what tomorrow will look and feel like.

    We also must realize that AI-driven products are already out in the wild, improving the performance of search engines, recommender systems (e.g., e-commerce, music), ad serving and financial trading (amongst others).

    Companies with the resources to invest in AI are already creating an impetus for others to follow suit — or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks.

    How Might You Apply AI Technologies?

    With such a powerful and generally applicable technology, AI companies can enter the market in different ways. Here are six to consider, along with example businesses that have chosen these routes:

    • There are vast amounts of enterprise and open data available in various data silos, whether web or on-premise. Making connections between these enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions (e.g., DueDil*, Premise and Enigma).
    • Leverage the domain expertise of your team and address a focused, high-value, recurring problem using a set of AI techniques that extend the shortfalls of humans (e.g., Sift Science or Ravelin* for online fraud detection).
    • Productize existing or new AI frameworks for feature engineering, hyperparameter optimization, data processing, algorithms, model training and deployment (amongst others) for a wide variety of commercial problems (e.g., H2O.ai, Seldon* and SigOpt).
    • Automate the repetitive, structured, error-prone and slow processes conducted by knowledge workers on a daily basis using contextual decision making (e.g., Gluru, x.ai and SwiftKey).
    • Endow robots and autonomous agents with the ability to sense, learn and make decisions within a physical environment (e.g., Tesla, Matternet and SkyCatch).
    • Take the long view and focus on research and development (R&D) to take risks that would otherwise be relegated to academia — but due to strict budgets, often isn’t anymore (e.g., DNN Research, DeepMind and Vicarious).

    There’s more on this discussion here. A key consideration, however, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productizing technologies for cheap means that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.

    Which Challenges Are Faced By Operators And Closely Considered By Investors?

    I see a range of operational, commercial and financial challenges that operators and investors closely consider when working in the AI space. Here are the main points to keep top of mind:


    • How to balance the longer-term R&D route with monetization in the short term? While more libraries and frameworks are being released, there’s still significant upfront investment to be made before product performance is acceptable. Users will often be benchmarking against a result produced by a human, so that’s what you’re competing against.
    • The talent pool is shallow: few have the right blend of skills and experience. How will you source and retain talent?
    • Think about balancing engineering with product research and design early on. Working on aesthetics and experience as an afterthought is tantamount to slapping lipstick onto a pig. It’ll still be a pig.
    • Most AI systems need data to be useful. How do you bootstrap your system without much data in the early days?


    • AI products are still relatively new in the market. As such, buyers are likely to be non-technical (or not have enough domain knowledge to understand the guts of what you do). They might also be new buyers of the product you sell. Hence, you must closely appreciate the steps/hurdles in the sales cycle.
    • How to deliver the product? SaaS, API, open source?
    • Include chargeable consulting, set up, or support services?
    • Will you be able to use high-level learnings from client data for others?


    • Which type of investors are in the best position to appraise your business?
    • What progress is deemed investable? MVP, publications, open source community of users or recurring revenue?
    • Should you focus on core product development or work closely on bespoke projects with clients along the way?
    • Consider buffers when raising capital to ensure that you’re not going out to market again before you’ve reached a significant milestone. 

    Build With The User In The Loop

    There are two big factors that make involving the user in an AI-driven product paramount. One, machines don’t yet recapitulate human cognition. To pick up where software falls short, we need to call on the user for help. And two, buyers/users of software products have more choice today than ever. As such, they’re often fickle (the average 90-day retention for apps is 35 percent).

    Returning expected value out of the box is key to building habits (hyperparameter optimization can help). Here are some great examples of products that prove that involving the user in the loop improves performance:

    • Search: Google uses autocomplete as a way of understanding and disambiguating language/query intent.
    • Vision: Google Translate or Mapillary traffic sign detection enable the user to correct results.
    • Translation: Unbabel community translators perfect machine transcripts.
    • Email Spam Filters: Google, again, to the rescue.

    We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer-term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.

    What’s The AI Investment Climate Like These Days?

    To put this discussion into context, let’s first look at the global VC market: Q1-Q3 2015 saw $47.2 billion invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA).

    We’re likely to breach $55 billion by year’s end. There are roughly 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.

    So far, we’ve seen about 300 deals into AI companies (defined as businesses whose description includes such keywords as artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning) from January 1, 2015 through December 1, 2015 (CB Insights).

    In the U.K., companies like Ravelin*, Signal and Gluru* raised seed rounds.

    Approximately $2 billion was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339 million debt+credit), ZestFinance ($150 million debt), LiftForward ($250 million credit) and Argon Credit ($75 million credit). Importantly, 80 percent of deals were < $5 million in size, and 90 percent of the cash was invested into U.S. companies versus 13 percent in Europe. Seventy-five percent of rounds were in the U.S.

    The exit market has seen 33 M&A transactions and 1 IPO. Six events were for European companies, 1 in Asia, and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532 million; $17 million raised), Elastica/Blue Coat Systems ($280 million; $45 million raised) and SupersonicAds/IronSource ($150 million; $21 million raised), which returned solid multiples of invested capital. The remaining transactions were mostly for talent, given that the median team size at the time of acquisition was 7 people.

    Altogether, AI investments will have accounted for roughly 5 percent of total VC investments for 2015. That’s higher than the 2 percent claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software.

    The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the U.S. Businesses must therefore have exposure to this market.

    Which Problems Remain To Be Solved?


    I spent a number of summers in university and three years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is very challenging, expensive, lengthy and regulated, and ultimately offers a transient solution to treating disease.

    Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real time, driving down cost of care over a patient’s lifetime while consequently improving outcomes.

    Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third parties). Sure, the news might paint a different story, but the fact is that we’re still using the web and its wealth of products.

    On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge.

    Look at today’s clinical model. A patient presents at the hospital when they feel something is wrong. The doctor must conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g., in the case of cancer).

    Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions...
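    As a toy illustration of the anomaly-detection piece of that list, a rolling z-score check can flag readings that deviate sharply from a patient's recent baseline. This is a minimal sketch with invented thresholds and data, not how any of the companies below actually work:

```python
# Illustrative sketch only: a rolling z-score anomaly detector for a
# vital-sign stream. Window size and threshold are hypothetical.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing window of readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A resting heart-rate series with one sudden spike
hr = [62, 64, 63, 61, 65, 62, 63, 64, 62, 63, 64, 118, 63, 62]
print(detect_anomalies(hr))  # → [11]
```

    Real monitoring systems use far richer multivariate models, but the principle of comparing new readings against a learned baseline is the same.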

    Some companies are already hacking away at this problem:

    • Sano: Continuously monitor biomarkers in blood using sensors and software.
    • Enlitic/MetaMind/Zebra Medical: Vision systems for decision support (MRI/CT).
    • Deep Genomics/Atomwise: Learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
    • Flatiron Health: Common technology infrastructure for clinics and hospitals to process oncology data generated from research.
    • Google: Filed a patent covering an invention for drawing blood without a needle. This is a small step toward wearable sampling devices.
    A point worth noting is that the U.K. has a slight leg up on the data access front. Initiatives like the U.K. Biobank (500,000 patient records), Genomics England (100,000 genomes sequenced), HipSci (stem cells) and the NHS care.data program are leading the way in creating centralized data repositories for public health and therapeutic research.

    Enterprise Automation

    Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9 trillion by 2020 (BAML). Coupled with the efficiency gains worth $1.9 trillion driven by robots, I reckon there’s a chance for near-complete automation of core, repetitive business functions in the future.

    Think of all the productized SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making.
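    The "connect applications and program business logic" idea can be sketched as a simple trigger-action rule, the pattern such integration tools are built around. All names and events here are invented for illustration, not any vendor's API:

```python
# Toy trigger-action pattern: when an event matches a condition,
# run the configured action. All names are hypothetical.
def make_rule(condition, action):
    """Bundle a condition and an action into one automation rule."""
    def rule(event):
        if condition(event):
            return action(event)
        return None
    return rule

# Example rule: when a new order exceeds $500, create a follow-up task
vip_followup = make_rule(
    condition=lambda e: e["type"] == "new_order" and e["amount"] > 500,
    action=lambda e: f"task: call customer {e['customer']}",
)

print(vip_followup({"type": "new_order", "amount": 900, "customer": "ACME"}))
# → task: call customer ACME
print(vip_followup({"type": "new_order", "amount": 50, "customer": "Bob"}))
# → None
```

    Layering contextual data points into the `condition` functions is where the decision-making described above would come in.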

    Perhaps we could eventually re-imagine the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfillment and shipping. Of course, this is probably a ways off.

    I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given the shortening investment horizons within which value must be created. More support is needed for companies driving long-term innovation, especially considering that far less is occurring within universities. VC was born to fund moonshots.

    We must remember that access to technology will, over time, become commoditized. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering.

    Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g., the user experience layer). As such, there’s renewed focus on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses.

    Finally, you must have exposure to the U.S. market, where the lion’s share of value is created and realized. We have an opportunity to catalyze the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond.

    Source: TechCrunch

  • Is AI a threat or an opportunity to data engineers?

    Is AI a threat or an opportunity to data engineers?

    Humans losing jobs to robots has been the preoccupation of economists and sci-fi writers alike for almost 100 years. AI systems are the next perceived threat to human jobs, but which jobs? Sourcing the logic from numerous open source packages or paid API services, connecting disparate datasets, and maintaining a pipeline are complex tasks that AIs are ill-suited to do at present. 

    AI and the data pipeline

    A well set up data pipeline is a thing of beauty, seamlessly connecting multiple datasets to a business intelligence tool to allow clients, internal teams, and other stakeholders to perform complex analysis and get the most out of their data. 

    Data engineers thrive on interesting challenges: bringing terabytes of data from wherever it lives to where it can be analyzed, transforming it using various libraries and services, and keeping the pipeline stable. However, the data preparation phase poses its own issues. It can be a creative process, and it’s certainly necessary, but saving that preparation logic and automating its repeated execution every X hours is a challenge. Today, the way to solve this challenge is by bringing in artificial intelligence and machine learning.

    Augmented analytics is the next iteration of business intelligence, where AI elements are incorporated into every phase of the BI process. The powerful AI (artificial intelligence) analytics systems that are emerging today have AI assisting users in a broad range of ways, but we’ll stay focused on data prep for this article. 

    Three sections of the data preparation process where AI can help that we’ll discuss are data cleaning and transformation, extracting and loading, and verifying the prepared data. 

    Clean as you go

    The saying 'data is the new oil' gets tossed around enough to have already become a cliche, but for the purposes of our discussion it’s an especially apt metaphor. Most companies are sitting on huge stores of data, but in its unprocessed form, it’s not very useful. Even worse, analyzing non-normalized data can lead to harmful and misleading results. To continue with the oil metaphor, you need a stable and reliable pipeline to take your data from where it’s stored to where it’ll be processed so that its true value can be harnessed.

    While that data is in motion, data engineers can digest it so that it’s closer to being in a usable state by the time it hits the BI system. BI platforms are already using AI to help with the data cleansing process in a variety of ways. Let’s walk through how AI can assist you:

    1. AI assistance can recommend a data model structure, including which columns to join, which to compound, and maybe even create dimension tables to facilitate the fact table joins.
    2. AI systems can apply simple rulesets to help standardize the data by doing things like making all text lowercase and removing blank spaces before and after values. 
    3. If you already have a perfectly formatted dataset to use as a learning dataset, AI assistance can even be trained on this to recognize how the larger dataset should look, allowing it to take a holistic approach to cleansing, rather than you telling it specific tasks to do. 
    4. As AI assistance learns how you want your data to look, the system can even scan all the columns and make recommendations as to what to fix, implement active learning, or go ahead and fix errors on its own, such as removing redundant records (deduplication caused by misspelling, for example) or using context clues to fill in missing values. 
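    The simple rulesets in step 2 and the deduplication in step 4 can be sketched in a few lines. This is an illustrative toy assuming records arrive as dictionaries of text fields; real platforms learn far richer rules:

```python
# Minimal sketch of rule-based standardization (lowercasing, trimming
# whitespace) and removal of duplicates caused only by casing or spaces.
def standardize(record):
    """Apply simple cleaning rules to every text field of a record."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def deduplicate(records):
    """Drop records that become identical after standardization."""
    seen, cleaned = set(), []
    for rec in map(standardize, records):
        key = tuple(sorted(rec.items()))
        if key not in seen:
            seen.add(key)
            cleaned.append(rec)
    return cleaned

rows = [
    {"name": " Alice ", "city": "Berlin"},
    {"name": "alice", "city": "berlin"},   # duplicate after cleaning
    {"name": "Bob", "city": "Paris"},
]
print(deduplicate(rows))
# → [{'name': 'alice', 'city': 'berlin'}, {'name': 'bob', 'city': 'paris'}]
```

    An AI-assisted system would go further, learning rules like these from a reference dataset rather than having them hand-coded.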

    Extracting and loading

    The rise of cloud data warehouses has changed the way companies treat their data. In the past, well-organized databases were needed to keep records in order. Today, data comes from a wide array of different sources and in a variety of different forms, from user-generated to sensor data. More and more frequently we even witness companies using third-party data to enrich their business logic (e.g., how will the weather forecast affect my sales?).

    This change coincided with an increase in the sophistication of AI data analytics systems, allowing them to deal with data in all its types, structured (numerical) and unstructured (text, image, video). Data storage in cloud warehouses like Redshift is cheap, and different roles are often responsible for data gathering and storage, so rather than worry about how everything is formatted, companies just pump everything into the warehouse, however it’s formatted, and deal with it later.

    This is another place where BI with AI has a chance to shine, extracting the data, performing transformations on it, then loading it into the BI tool. The same AI abilities mentioned before can be applied in this way to end up with usable data at the endpoint: removing duplicate records, filling blank values, and suggesting other cleansing and transformation actions, such as clustering and segmentation, based on the learning dataset. However your data is stored, the right AI analytics tool can help get it into better shape for when you create your single source of truth; it can also help as you load your data into your BI platform or data science tool.

    While you’re moving your data into your BI system, the big chance for an AI assist is in monitoring the process. If a load fails, exceeds the normal time threshold or the forecasted one, the AI can learn that and ping the engineer to let them know there’s a problem. A sudden change in the volume of data being loaded could also be worth a mention, so that the engineer can look into it and see if there’s a larger problem. 
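    A hypothetical sketch of that monitoring idea: learn a normal load duration from history and alert the engineer when a new load deviates too far. The threshold and numbers here are invented:

```python
# Sketch of load monitoring: compare a new load's duration against the
# historical mean, alerting beyond a configurable sigma threshold.
from statistics import mean, stdev

def load_alerts(durations, new_duration, sigmas=3.0):
    """Return an alert message if `new_duration` (seconds) is more than
    `sigmas` standard deviations above the historical mean, else None."""
    mu, sd = mean(durations), stdev(durations)
    if new_duration > mu + sigmas * sd:
        return f"ALERT: load took {new_duration}s (normal is about {mu:.0f}s)"
    return None

history = [120, 115, 130, 125, 118, 122]
print(load_alerts(history, 124))  # within range → None
print(load_alerts(history, 300))  # far over → alert string
```

    A production system would also track data volume and learn the thresholds itself rather than use a fixed sigma cutoff.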

    The bottom line is that a strong AI analytics system can be a second set of eyes for a busy data engineering team, freeing them to focus on the challenges that drive more value to the analytics team, and ultimately the business.

    Outliers, efficiency, and verifying results

    Outlier detection is one task that an AI system can be designed to handle that would have huge benefits for data engineers dealing with large volumes of not-quite-perfect data. The AI would monitor tables as they get created and new data gets loaded, and check the outputs. As the system scans the values within a column, it could test for things like uniqueness, referential integrity (to values that are keys in other tables), skewed distribution, null values, and accepted values. It would basically be checking the whole table and asking 'does this column look correct?' based on a series of rules that could be applied to it. If the AI believes that one of the rules could apply, and that the column's values do not meet the rule’s conditions, then it would send an alert to the engineers.
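    Those per-column rules (uniqueness, null values, accepted values) can be sketched as a small checker. This is an invented illustration of the idea, not any vendor's implementation:

```python
# Toy per-column validator covering three of the rules mentioned above.
def check_column(values, unique=False, allow_null=False, accepted=None):
    """Return a list of rule violations for one table column."""
    problems = []
    non_null = [v for v in values if v is not None]
    if not allow_null and len(non_null) < len(values):
        problems.append("null values present")
    if unique and len(set(non_null)) < len(non_null):
        problems.append("duplicate values in unique column")
    if accepted is not None:
        bad = set(non_null) - set(accepted)
        if bad:
            problems.append(f"unexpected values: {sorted(bad)}")
    return problems

ids = ["A1", "A2", "A2", None]
print(check_column(ids, unique=True))
# flags the null entry and the duplicated "A2"
```

    The AI element would lie in inferring which rules apply to which columns, rather than having engineers configure each check by hand.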

    Trusting your data without checking your work is a recipe for disaster. Having a few questions you already know ballpark answers to can be a great way to test your AI-prepped data afterward. If your answers come back within acceptable limits, then you know the prep process was (acceptably) successful. If there are major discrepancies, you may have to retrain the system or adjust the strictness/laxness of the settings you’re using.

    Some other tasks a BI system with AI can assist with include showing you which joins are occurring most frequently across your model and suggesting pre-aggregation. This could prove useful for data analysts to know and help them with speedier queries down the road. AI could also scan columns and test for uniqueness. For example, if every value needs to be unique, like an ID column for all your Salesforce accounts, and there are two different users with the same account ID, then the AI could call that out. For purely numerical data, AI could identify outliers that might indicate improperly entered data. Either way, the AI is once again an extra set of eyes, performing detailed, routine work, at scale, and surfacing the results to human data engineers only when necessary. 

    Is AI taking engineering jobs?

    Although humans losing jobs to robots makes a nice story, for data engineers it is far from the truth. Tackling routine tasks like eliminating redundant data, filling in gaps in datasets, and pinging human engineers when anomalies arise are all places where AI analytics systems can really add value, doing the heavy lifting that humans don’t really want to do anyway and augmenting hard-working data engineers so they can tackle the challenging problems that will lead to bigger rewards for the company down the line.

    Author: Inna Tokarev Sela

    Source: Sisense

  • Is Artificial Intelligence shaping the future of Market Intelligence?

    Is Artificial Intelligence shaping the future of market intelligence?

    Global developments, increasing competition, shifting consumer demands... These are only a few of the countless external forces that will shape the exciting world of tomorrow.

    As a company, how can you be prepared for rapid changes in the environment?

    That's where market intelligence proves its value.

    Companies require proactive and forward-thinking market intelligence in order to detect and react to critical market signals. This kind of intelligence is critical to guarantee sustainable profits and ensure survival in today’s highly competitive environment.

    The market intelligence field over the years

    Just like the world itself, the market intelligence field has seen some major changes over the past couple of decades. For example, the rise and popularity of social media has made it notably easier to track data about consumers and competitors. It is widely accepted that this field will undergo changes at an even higher pace in the future, due to significant technical, social and organizational developments.

    But what are the developments and trends that will impact market intelligence most over the next few years? According to the research paper State of the Art and Trends of Market Intelligence, the most impactful developments are Artificial Intelligence, Data Visualization, and the GDPR legislation. The focus of this article is on the role of Artificial Intelligence (AI).

    Artificial Intelligence

    Artificial Intelligence is the intelligence displayed by machines, often characterized by learning and the ability to adapt to changes. According to Qualtrics, 93% of market researchers see AI as an opportunity for the research business.

    Where can AI add value?

    AI can add value in the processing of large and unstructured datasets. Open-ended data can be processed with ease thanks to AI technologies such as Natural Language Processing (NLP). NLP enables computers to understand, interpret and manipulate natural human language, and can thus assist in tracking the sentiment of different sentences. In business, this can be applied, for example, to the assessment of reviews, which is usually a slow task. With NLP, however, this process can be streamlined efficiently.
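    As a toy illustration of sentiment tracking over reviews, a lexicon-based scorer counts positive and negative words per sentence. Production NLP uses trained models; the word lists here are invented:

```python
# Minimal lexicon-based sentiment scoring. Word lists are illustrative.
POSITIVE = {"great", "excellent", "fast", "friendly", "love"}
NEGATIVE = {"slow", "broken", "rude", "terrible", "hate"}

def sentiment(sentence):
    """Score a sentence: +1 per positive word, -1 per negative word."""
    words = sentence.lower().replace(".", "").replace(",", "").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

reviews = ["Great product, fast delivery.", "Terrible support, very slow."]
print([sentiment(r) for r in reviews])  # → [2, -2]
```

    Even this crude approach hints at why automating review assessment is attractive: the scoring is instant, where a human reader is the slow step.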

    NLP can also be used as an add-on for language translation programs. It allows for the rudimentary translation of a text, before it is translated by a human. This method also makes it possible to quickly translate reports and documents written in another language, which can be very beneficial during the collection of raw data.

    Additionally, NLP can assist with practices like Topic Modeling, which consists of the automatic generation of keywords for different articles and blogs. This tool makes the process of building a huge set of labeled data more convenient. Another method, which also utilizes NLP, is Text Classification: an algorithm that can automatically suggest a related category for a specific article or news item.
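    Text classification of the kind just described can be sketched as keyword overlap: suggest the category whose keyword set best matches the article. The categories and keyword lists below are invented for illustration:

```python
# Toy keyword-overlap classifier suggesting a category for a text.
TOPICS = {
    "finance": {"market", "stocks", "bank", "earnings"},
    "sports":  {"match", "goal", "team", "league"},
    "tech":    {"software", "ai", "cloud", "startup"},
}

def classify(text):
    """Return the topic whose keyword set overlaps the text most."""
    words = set(text.lower().split())
    return max(TOPICS, key=lambda t: len(TOPICS[t] & words))

print(classify("the startup released new ai software"))  # → tech
print(classify("bank earnings beat the market"))         # → finance
```

    Real Topic Modeling and Text Classification learn these keyword sets from labeled data instead of having them written by hand.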

    Desk research is extremely valuable in the process of gathering relevant market intelligence. However, it is very time-consuming. This is problematic, because important insights may not arrive at the desk of the specific decision maker in time. This can be detrimental to a company’s ability to react quickly in a fast-changing business environment. AI can speed up this process, as it can rapidly read all kind of sources and identify trends significantly faster than traditional desk research ever could.

    The future of market intelligence

    Clearly, the applications mentioned in this article are just a selection of the wide range of possibilities AI is providing within the field of market research and intelligence. The popularity of this technology is increasing rapidly, and it can unlock stunning amounts of relevant and rich information for all kinds of fields.

    Does this imply that traditional methods and analysis are now redundant?

    Of course not! AI also has its own limitations.

    In the next few years, the true value of AI and other technological developments will be shown. The real power lies in the combination of AI with more traditional research methods. The results will allow businesses to arrive at actionable insights faster and, in turn, support solid, data-driven decision-making. This way market intelligence can help companies take the steps that lead to tomorrow’s success.

    Author: Kees Kuiper

    Source: Hammer Intel

  • Is the multicloud a reason for the slow adoption of AI in organizations?

    Is the multicloud a reason for the slow adoption of AI in organizations?

    Despite its enormous potential, the adoption of AI is proceeding relatively slowly. According to Efrym Willems, Business Development IBM Watson, Analytics, IoT & IBM Cloud at Tech Data, the multicloud is an often-heard reason to pass on the technology. Unjustly so, in his view: 'AI is now a realistic option in multicloud environments as well'.

    In early 2019, low-code developer Mendix announced a far-reaching integration of its own platform with IBM Cloud Services. This gives application developers easy access to the functions of the artificial intelligence (AI) platform IBM Watson. Moreover, applications developed with Mendix run directly in the IBM Cloud. At first glance that may seem like a detail, but it is an important step toward broader adoption of AI in multicloud environments. And a welcome development it is: according to analysts and AI vendors, AI adoption is lagging, with the multicloud standing in the way, as organizations do not know how to bring their fragmented data landscape together. 'Yet IBM proves that the multicloud does not have to be a barrier at all', says Willems.

    AI compiles highlights and makes diagnoses

    Good news, because AI has proven its effectiveness. A recent example is the highlight reel of the Wimbledon final, which was compiled entirely by the AI system IBM Watson. 'The battle between tennis legends Roger Federer and Novak Djokovic in the 2019 Wimbledon final lasted almost five hours. Yet a highlight reel was ready two minutes after the match. The system selected the highlights based on the sound and facial expressions of the crowd. Twenty minutes after the final, personalized highlight reels were even available', says the Tech Data expert. AI has also proven its value in areas such as optimizing production processes and improving healthcare. Dutch hospitals, for instance, are experimenting extensively with AI applications, including diagnostics. 'Watson made the correct diagnosis for a woman within 10 minutes: a rare form of leukemia'.


    Still, organizations have not jumped on the AI train en masse. According to research, only a quarter of organizations have a company-wide AI strategy. Concerns about data integration often spoil the fun. 'The structured and unstructured data needed for analyses are often spread across multiple locations, both in the cloud and on-premises', Willems explains. 'But that no longer has to be a problem. Effective use of AI is entirely possible in the multicloud'. A number of conditions are important, though:

    1. AI on all platforms

    For AI to work well, the technology must be present on all platforms where the data and applications are in use. 'That is exactly why IBM has made its Watson solution available for various platforms, via microservices', Willems explains. 'These run in a Kubernetes container, on-premises or in the IBM Cloud, but they also function fine in the clouds of Microsoft, Amazon and Google, for example. So the AI comes to the data, instead of all the data having to come to the AI. This approach offers another advantage as well: it prevents organizations from being locked into one specific environment'.

    2. Data connectors

    The above is not sufficient for every organization. Data can be fragmented even further, for example across environments such as Dropbox, Salesforce, Tableau and Looker. In those cases it is important that data connectors are available for these environments, so the AI solution can still make use of the data stored there. In addition, IBM last year enriched Watson Studio, its platform for data science and machine learning, with improved integration with Hadoop distributions (CDH and HDP). According to Willems, this likewise makes it possible to run analytics where the data resides and to use the compute power available there.

    3. Alternative: bring the data to one place

    An alternative is bringing datasets together on one central platform. 'IBM Cloud, which has been the new name for SoftLayer since 2018, offers that option. For example with IaaS or PaaS services, or simply by offering cloud storage'. It is also possible to integrate IaaS and PaaS services into a multicloud environment, Willems adds.

    4. Broad support for development tools

    In the scenario outlined above, the integration of Mendix with IBM Cloud is an important development for AI adoption. 'Once the data has been consolidated, purpose-built apps can unlock and analyze it', says Willems. 'Developing those apps is fast and relatively easy with low-code and no-code platforms from providers such as Mendix or OutSystems'. In addition, IBM Bluemix, IBM's developer toolset, is of course now available under the IBM Cloud flag.

    No obstacle

    With the above points in mind, AI can add value regardless of the chosen cloud deployment. 'Whether an organization brings the AI to the data or the other way around: in either case a multicloud environment is no longer an obstacle', Willems concludes.

    Source: BI-Platform

  • Machine learning, AI, and the increasing attention for data quality

    Machine learning, AI, and the increasing attention for data quality

    Data quality has been going through a renaissance recently.

    As a growing number of organizations increase efforts to transition computing infrastructure to the cloud and invest in cutting-edge machine learning and AI initiatives, they are finding that the main barrier to success is the quality of their data.

    The old saying “garbage in, garbage out” has never been more relevant. With the speed and scale of today’s analytics workloads and the businesses that they support, the costs associated with poor data quality are also higher than ever.

    This is reflected in a massive uptick in media coverage on the topic. Over the past few months, data quality has been the focus of feature articles in The Wall Street Journal, Forbes, Harvard Business Review, MIT Sloan Management Review and others. The common theme is that the success of machine learning and AI is completely dependent on data quality. A quote that summarizes this dependency very well is this one by Thomas Redman: ''If your data is bad, your machine learning tools are useless.''

    The development of new approaches towards data quality

    The need to accelerate data quality assessment, remediation and monitoring has never been more critical for organizations and they are finding that the traditional approaches to data quality don’t provide the speed, scale and agility required by today’s businesses.

    For this reason, highly rated data preparation business Trifacta recently announced an expansion into data quality and unveiled two major new platform capabilities: active profiling and smart cleaning. This is the first time Trifacta has expanded its focus beyond data preparation. By adding new data quality functionality, the business aims to handle a wider set of data management tasks as part of a modern DataOps platform.

    Legacy approaches to data quality involve many manual, disparate activities as part of a broader process. Dedicated data quality teams, often disconnected from the business context of the data they are working with, manage the process of profiling, fixing and continually monitoring data quality in operational workflows. Each step must be managed in a completely separate interface. It’s hard to iteratively move back-and-forth between steps such as profiling and remediation. Worst of all, the individuals doing the work of managing data quality often don’t have the appropriate context for the data to make informed decisions when business rules change or new situations arise.

    Trifacta uses interactive visualizations and machine intelligence to guide users, highlighting data quality issues and providing intelligent suggestions on how to address them. Profiling, user interaction, intelligent suggestions and guided decision-making are all interconnected, each step driving the others. Users can seamlessly transition back-and-forth between steps to ensure their work is correct. This guided approach lowers the barriers to users and helps to democratize the work beyond siloed data quality teams, allowing those with the business context to own and deliver quality outputs with greater efficiency to downstream analytics initiatives.

    New data platform capabilities like this are only a first (albeit significant) step into data quality. Keep your eyes open and expect more developments towards data quality in the near future!

    Author: Will Davis

    Source: Trifacta

  • Making AI actionable within your organization

    Making AI actionable within your organization

    It can be really frustrating to run a successful pilot or implement an AI system without it getting widespread adoption through your organization. Operationalizing AI is a really common problem. It may seem that everyone else is using AI to make a huge difference in their business while you’re struggling to figure out how to operationalize the results you’ve gotten from trying a few AI systems.

    There has been so much advancement in AI, so how can you make this great technology actually translate into actionable business results?

    This is a very common problem affecting enterprises of all kinds, from the biggest companies to mid-sized businesses.

    Here are a few quick pointers on how to turn your explorations in AI into AI practices that deliver real returns on your investments.

    Pragmatic AI

    Firstly, focus on what gets called 'Pragmatic AI', practical AI that has obvious business applications. It’s going to be a long time before we have 'strong AI', so look for solutions that were made by examining problems that businesses deal with every day and then decide to use artificial intelligence to solve the problem. It’s great that your probabilistic Bayesian system is thinking of the world differently or that a company feels like they’ve taken a shortcut around some of the things that make Deep Learning systems slow to train, but what does that mean for the end user of the artificial intelligence? When you’re looking for a practical solution, look for companies who are always trying to improve their user experience and where a PhD in machine learning isn’t needed to write the code.

    Internal valuations

    Similarly, change the way you are considering bringing an AI solution into your company. AI works best when the company isn’t trying to do a science fair project. It works best when it is trying to solve a real business problem. Before evaluating vendors in any particular AI solution or going out to see how RPA solutions really work, talk to users around your business. Listen to the problems they have and think about what kind of solutions would make a huge difference. By making sure that the first AI solution you bring into your organization aligns to business goals, you are much more likely to succeed in getting widespread adoption and a green light to try additional new technologies when it comes time to review budgets.

    And no matter how technology-forward your organization is, AI adoption works best when everyone can understand the results. Pick a KPI focused problem like conversion, customer service, or NPS where the results can be understood without thinking about technology. This helps get others outside of the science project mentality to open their minds on how AI can be used throughout the business.

    Finally, don’t forget that AI can help in a wide variety of ways. Automation is a great place to use AI within an organization, but remember that in many use cases, humans and computers do more together than separately and great uses for AI technology help your company’s employees do their job better or focus on the right pieces of data. These solutions often provide as much value as pure automation!

    Source: Insidebigdata

  • MicroStrategy: Take your business to the next level with machine learning

    MicroStrategy: Take your business to the next level with machine learning

    It’s been nearly 22 years since history was made across a chess board. The place was New York City, and the event was Game 6 of a series of games between IBM’s “Deep Blue” and the renowned world champion Garry Kasparov. It was the first time ever a computer had defeated a player of that caliber in a multi-game scenario, and it kicked off a wave of innovation that’s been methodically working its way into the modern enterprise.

    Deep Blue was a formidable opponent because of its brute-force approach to chess. In a game where luck is entirely removed from the equation, it could run a search algorithm on a massive scale to evaluate moves, discarding candidate moves once they proved to be less valuable than a previously examined and still available option. This giant decision tree powered the computer to a winning position in just 19 moves, with Kasparov resigning.
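    The pruning idea described here, discarding a candidate move as soon as it proves worse than an already-examined alternative, is the classic alpha-beta refinement of minimax search. Below is a minimal Python sketch over a toy game tree; the tree values are illustrative, not taken from chess.

```python
def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning over a nested-list game tree.

    Leaves are numeric scores; internal nodes are lists of children.
    """
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:  # prune: the minimizer already has a better line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:  # prune: the maximizer already has a better line
            break
    return value

# The maximizer chooses between two replies the minimizer will answer:
tree = [[3, 5], [2, 9]]
print(alphabeta(tree))  # -> 3, the best score the maximizer can guarantee
```

    Note that the second subtree is abandoned after seeing the leaf 2: the minimizer could force a score below the 3 already guaranteed elsewhere, so the remaining branch is never evaluated.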

    As impressive as Deep Blue was back then, present-day computing capabilities are stronger by orders of magnitude, with approaches inspired by the neural networks of the human brain. Data scientists create inputs and define outputs to detect previously indecipherable patterns, the important variables that influence games and, ultimately, the next move to take.

    Models can also continue to ‘learn’ from playing different scenarios and then update themselves through a process called ‘reinforcement learning’ (as the Go-playing AlphaZero program does). The result of this? The ability to process millions of scenarios in a fraction of a second to determine the best possible action, with implications far beyond the gameboard.

    Integrating machine learning models into your business workflows comes with its challenges: business analysts are typically unfamiliar with machine learning methods and/or lack the coding skills necessary to create viable models; integration issues with third-party BI software may be a nonstarter; and the need for governed data to avoid incorrectly trained models is a barrier to success.

    As a possible solution, one could use MicroStrategy as a unified platform for creating and deploying data science and machine learning models. With APIs and connectors to hundreds of data sources, analysts and data scientists can pull in trusted data. And when using the R integration pack, business analysts can produce predictive analytics without coding knowledge and disseminate those results throughout their organization.

    The use cases are already coming in as industry leaders put this technology to work. As one example, a large governmental organization reduced employee attrition by 10% using machine learning, R, and MicroStrategy.

    Author: Neil Routman

    Source: MicroStrategy

  • Organizing Big Data by means of using AI

    No matter what your professional goals are, the road to success is paved with small gestures. Often framed via KPIs (key performance indicators), these transitional steps form the core categories contextualizing business data. But what data matters?

    In the age of big data, businesses are producing larger amounts of information than ever before, and there need to be efficient ways to categorize and interpret that data. That’s where AI comes in.

    Building Data Categories

    One of the longstanding challenges with KPI development is that there are countless divisions any given business can use. Some focus on website traffic while others are concerned with social media engagement, but the most important thing is to focus on real actions and not vanity measures. Even if it’s just the first step toward a sale, your KPIs should reflect value for your bottom line.


    Small But Powerful

    KPIs typically cover a variety of similar actions – all Facebook behaviors or all inbound traffic, for example. The alternative, though, is to break down KPI-type behaviors into something known as micro conversions. 

    Micro conversions are simple behaviors that signal movement toward an ultimate goal like completing a sale, but carefully gathering data from micro conversions and tracking them can also help identify friction points and other barriers to conversion. This is especially true any time your business undergoes a redesign or institutes a new strategy. Comparing micro data points from the different phases, then, is a high-value means of assessment.
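    Tracking micro conversions as a funnel makes friction points visible as drop-off between consecutive steps. A minimal sketch, with illustrative step names and counts:

```python
def drop_off(funnel):
    """Return the share of users lost at each step-to-step transition.

    `funnel` is an ordered list of (step_name, user_count) pairs.
    """
    losses = {}
    for (step_a, n_a), (step_b, n_b) in zip(funnel, funnel[1:]):
        losses[f"{step_a} -> {step_b}"] = 1 - n_b / n_a
    return losses

# Illustrative micro-conversion counts for one period:
funnel = [("visit", 1000), ("add_to_cart", 300), ("checkout", 250), ("sale", 100)]
print(drop_off(funnel))
# the visit -> add_to_cart and checkout -> sale transitions lose the most users
```

    Comparing these per-transition losses before and after a redesign shows exactly which step a strategy change helped or hurt.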

    AI Interpretation

    Without AI, this micro data would be burdensome to manage (there’s just so much of it), but AI tools are able both to collect data and to interpret it for application, particularly within comparative frameworks. All AI needs is well-developed KPIs.

    Business KPIs direct AI data collection, allow the system to identify shortfalls, and highlight performance goals that are being met, but it’s important to remember that AI tools can’t fix broader strategic or design problems. With the rise of machine learning, some businesses have come to believe that AI can solve any problem, but what it really does is clarify the data at every level, allowing your business to jump into action.

    Micro Mapping

    Perhaps the easiest way to describe what AI does in the age of big data is with a comparison. Your business is a continent and AI is the cartographer that offers you a map of everything within your business’s boundaries. Every topographical detail and landmark is noted. But the cartographer isn’t planning a trip or analyzing the political situation of your country. That’s up to someone else. In your business, that translates to the marketing department, your UI/UX experts, or C-suite executives. They solve problems by drawing on the map.

    Unprocessed big data is overwhelming – think millions of grains of sand that don’t mean anything on their own. AI processes that data into something useful, something with strategic value. Depending on your KPI, AI can even draw a path through the data, highlighting common routes from entry to conversion, where customers get lost – what you might consider friction points – and where they engage. When you begin to see data in this way, it becomes clear that it’s a world unto itself, and one that has been fundamentally incomprehensible to users.

    Even older CRM and analytics programs fall short when it comes to seeing the big picture, and that’s why data management has changed so much in recent years. Suddenly, we have the technology to identify more than click-through rates or page likes. AI fueled by big data ushers in a new organizational era with an emphasis on action. If you’re willing to follow the data, AI will draw you the map.


    Author: Larry Alton

    Source: Information Management

  • Pattern matching: The fuel that makes AI work

    Much of the power of machine learning rests in its ability to detect patterns. Much of the basis of this power is the ability of machine learning algorithms to be trained on example data such that, when future data is presented, the trained model can recognize that pattern for a particular application. If you can train a system on a pattern, then you can detect that pattern in the future. Indeed, pattern matching in machine learning (and its counterpart in anomaly detection) is what makes many applications of artificial intelligence (AI) work, from image recognition to conversational applications.

    As you can imagine, there are a wide range of use cases for AI-enabled pattern and anomaly detection systems. Pattern recognition, one of the seven core patterns of AI applications, is being applied to fraud detection and analysis, finding outliers and anomalies in big stacks of data; recommendation systems, providing deep insight into large pools of data; and other applications that depend on identification of patterns through training.

    Fraud detection and risk analysis

    One of the challenges with existing fraud detection systems is that they are primarily rules-based, using predefined notions of what constitutes fraudulent or suspicious behavior. The problem is that humans are particularly creative at skirting rules and finding ways to fool systems. Companies looking to reduce fraud, suspicious behavior or other risk are finding solutions in machine learning systems that can either be trained to recognize patterns of fraudulent behavior or, conversely, find outliers and anomalies relative to learned acceptable behavior.

    Financial systems, especially banking and credit card processing institutions, are early adopters in using machine learning to enable real-time identification of potentially fraudulent transactions. AI-based systems are able to handle millions of transactions per minute and use trained models to make millisecond decisions as to whether a particular transaction is legitimate. These models can identify which purchases don't fit usual spending patterns or look at interactions between paying parties to decide if something should be flagged for further inspection.
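    One simple way to flag purchases that don’t fit usual spending patterns is to learn a per-customer profile and score each new transaction against it. The sketch below is a deliberately minimal, hypothetical illustration (production systems use trained models over many features, not a single z-score over amounts):

```python
import statistics

def fit_profile(amounts):
    """Learn a customer's normal spending profile: mean and spread."""
    return statistics.mean(amounts), statistics.pstdev(amounts)

def is_suspicious(amount, profile, threshold=3.0):
    """Flag a transaction more than `threshold` standard deviations
    away from the customer's usual spending."""
    mean, std = profile
    if std == 0:
        return amount != mean  # no variation observed: anything new stands out
    return abs(amount - mean) / std > threshold

history = [12.5, 40.0, 27.3, 15.0, 33.1, 22.8]  # past purchase amounts
profile = fit_profile(history)
print(is_suspicious(25.0, profile))    # typical purchase -> False
print(is_suspicious(4200.0, profile))  # far outside the pattern -> True
```

    A scoring rule like this is cheap enough to run per transaction in milliseconds, which is why the learned-profile approach scales to the real-time decisions described above.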

    Cybersecurity firms are also finding significant value in the application of machine learning-based pattern and anomaly systems to bolster their capabilities. Rather than depending on signature-based systems, which are primarily oriented toward responding to attacks that have already been reported and analyzed, machine learning-based systems are able to detect anomalous system behavior and block those behaviors from causing problems to the systems or networks.

    These AI-based systems are able to adapt to continuously changing threats and can more easily handle new and unseen attacks. The pattern and anomaly systems can also help to improve overall security by categorizing attacks and improving spam and phishing detection. Rather than requiring users to manually flag suspicious messages, these systems can automatically detect messages that don't fit the usual pattern and quarantine them for future inspection or automatic deletion. These intelligent systems can also autonomously monitor software systems and automatically apply software patches when certain patterns are discovered.

    Uncovering insights in data

    Machine learning-based pattern recognition systems are also being applied to extract greater value from existing data. Machines can look at data to find insights, patterns and groupings, and use the power of AI systems to find patterns and anomalies humans aren’t always able to see. This has broad applicability to both back-office and front-office operations and systems. Whereas before, data visualization was the primary way in which users could extract value from large data sets, machine learning is now being used to find the groupings, clusters and outliers that might indicate some deeper connection or insight.

    In one interesting example, through machine learning pattern analysis, Walmart discovered consumers buy strawberry pop-tarts before hurricanes. Using unsupervised learning approaches, Walmart identified the pattern of products that customers usually buy when stocking up ahead of time for hurricanes. In addition to the usual batteries, tarps and bottled water, it discovered that the rate of purchase of strawberry pop-tarts also increased. No doubt, Walmart and other retailers are using the power of machine learning to find equally unexpected, high-value insights from their data.

    Automatically correcting errors

    Pattern matching in machine learning can also be used to automatically detect and correct errors. Data is rarely clean and often incomplete. AI systems can spot routine mistakes or errors and make adjustments as needed, fixing data, typos and process issues. Machines can learn what normal patterns and behavior look like, quickly spot and identify errors, automatically fix issues on their own and provide feedback if needed.

    For example, algorithms can detect outliers in medical prescription behavior, flag these records in real time and send a notification to healthcare providers when the prescription contains mistakes. Other automated error correction systems are assisting with document-oriented processes, fixing mistakes made by users when entering data into forms by detecting when data such as names are placed into the wrong fields or when other information is incomplete or inappropriately entered.

    Similarly, AI-based systems are able to automatically augment data by using patterns learned from previous data collection and integration activities. Using unsupervised learning, these systems can find and group information that might be relevant, connecting all the data sources together. In this way, a request for some piece of data might also retrieve additional, related information, even if not explicitly requested by the query. This enables the system to fill in the gaps when information is missing from the original source, correct errors and resolve inconsistencies.

    Industry applications of pattern matching systems

    In addition to the applications above, there are many use cases for AI systems that implement pattern matching in machine learning capabilities. One use case gaining steam is the application of AI for HR and staffing. AI systems are being tasked to find the best match between job candidates and open positions. While traditional HR systems are dependent on humans to make the connection or use rules-based matching systems, increasingly, HR applications are making use of machine learning to learn what characteristics of employees make the best hires. The systems learn from these patterns of good hires to identify which candidates should float to the surface of the resume pile, resulting in more optimal matches.

    Since the human is eliminated in this situation, AI systems can be used to screen candidates and select the best person, while reducing the risk of bias and discrimination. Machine learning systems can sort through thousands of potential candidates and reach out in a personalized way to start a conversation. The systems can even augment the data in the job applicant's resume with information it gleans from additional online sources, providing additional value.

    In the back office, companies are applying pattern recognition systems to detect transactions that run afoul of company rules and regulations. AI startup AppZen uses machine learning to automatically check all invoices and receipts against expense reports and purchase orders. Any items that don't match acceptable transactional patterns are sent for human review, while the rest are expedited through the process. Occupational fraud, on average, costs a company 5% of its revenues each year, with the annual median loss at $140,000, and over 20% of companies reporting losses of $1 million or more.

    The key to solving this problem is to put processes and controls in place that automatically audit, monitor, and accept or reject transactions that don't fit a recognized pattern. AI-based systems are definitely helping in this way, and we'll increasingly see them being used by more organizations as a result.

    Author: Ronald Schmelzer

    Source: TechTarget

  • Pyramid Analytics' 5 main takeaways from the Insurance AI and Analytics USA conference in Chicago

    Pyramid Analytics was thrilled to participate in the Insurance AI and Analytics USA conference in beautiful Chicago, May 2-3. The goal of the conference was to provide education to insurance leaders looking for ways to use AI and ML to extract more value out of their data. In all of their conversations, the eagerness to do more with data was palpable, but a tinge of frustration could be detected beneath the surface.

    Curious to understand this contradiction, they started most of their conversations with the same basic question: 'What brings you to the show?' This was followed by a slightly deeper question: 'Where are you with your AI and ML initiatives?'

    The responses varied. However, a common thread emerged: despite the desire to incorporate AI and ML capabilities into routine business practices, roadblocks remain, regardless of carrier type. Chief among the concerns of the attendees was the ability to access data; it appears that data silos are alive and well. We also heard many express frustrations with the tools used to derive AI and ML insights.

    The most common reasons for attending the show fell into five groups, organized by persona:

    1. Data scientists looking for deeper access to data 

    The data scientists seemed to struggle with access to data, which is often trapped within departments throughout the organization. To do their jobs effectively, data scientists need access to data so they can unlock trapped business value. They were seeking solutions that would help them bridge the gap between data and analytics.

    2. Executives from traditional organizations trying to understand the way forward

    To varying degrees, the insurance executives had AI and ML programs in place but weren’t satisfied with the results. They attended the conference to learn how they could extract more value from their AI and ML initiatives.

    3. Sophisticated insurers seeking technology to gain an edge on the competition

    This was a general takeaway from individuals from newer insurance companies who fit squarely into the “early technology adopter” category. Lacking the constraints of typical insurers (legacy processes and systems), these individuals were seeking information on new technologies and hoping to build partnerships with vendors to achieve further differentiation.

    4. Data and technology vendors looking to build meaningful partnerships

    There were many representatives from data and technology companies seeking out insurance partners looking to advance their businesses at the margins, either by enriching existing data stores or by finding new or unique data streams.

    5. Consultants promoting their unique approach to AI and ML initiatives

    It’s clear that AI and ML initiatives require more than just tools, people, and processes. They require strategic direction and a roadmap that builds consistency and accountability. There were a number of consultants making themselves available to insurers.

    Author: Michael Hollenbeck

    Source: Pyramid Analytics

  • Routine jobs are being swallowed up by robots and artificial intelligence

    As of 2016, robots and artificial intelligence are already developed enough to take over a relatively large share of humans' predictable physical work and data-processing tasks. Moreover, technological progress will ensure that more and more human tasks are taken over, which leads either to more time for other tasks or to a reduction in the number of human employees.

    Automation and robotization offer humanity the opportunity to free itself from repetitive physical work, which is often experienced as unpleasant or boring. Although the disappearance of this work will have positive effects on aspects such as health and job quality, the development also has negative effects on employment – especially in jobs that require few skills. In recent years there has been much debate about the scale of the threat that robots pose to human jobs, and a recent study by McKinsey & Company adds yet more fuel to the fire. According to estimates by the American consultancy, in the short term up to 51% of all work in the United States will be heavily affected by robotization and AI technology.

    Analyzing work activities

    The study, based on an analysis of more than 2,000 work-related activities in the US across more than 800 occupations, suggests that predictable physical work in relatively stable environments runs the greatest risk of being taken over by robots or another form of automation. Examples of such environments include the accommodation and hospitality sector, manufacturing and retail. The opportunities for robotization are particularly large in manufacturing – roughly a third of all work in the sector can be considered predictable. Given current automation technology, up to 78% of this work could be automated.

    But it is not only simple production work that can be automated, since data-processing and data-collection work can also be robotized with current technology. According to McKinsey's calculations, up to 47% of a retail salesperson's tasks in this area can be automated – although that is still far lower than the 86% automation potential in the data-related work of bookkeepers, accountants and auditors.

    Automation is technically feasible

    The study also mapped which occupations have the greatest automation potential. Given current technology, educational services and management appear to be the fields that will be least affected by robotization and AI technology. Percentages of automatable tasks are especially low in education, with little data collection, data processing or predictable physical work. Managers can expect some automation in their work, especially in data processing and collection. In construction and agriculture, much of the work can be considered unpredictable. The unpredictable nature of these activities protects workers in these segments, because such tasks are less easy to automate.

    McKinsey emphasizes that the analysis focuses on the ability of current technologies to take over human tasks. That this is technologically possible does not mean, according to the consultancy, that this work will actually be taken over by robots or intelligent technology. The study does not take into account the implementation costs of the technology or the limits of automation. In certain cases, human workers will therefore remain cheaper and more readily available than a robotized system.

    Looking ahead, the researchers predict that the arrival of new technologies in robotics and artificial intelligence will make even more tasks automatable. In particular, technology that makes it possible to hold natural conversations with robots – machines that can understand human language and respond automatically – will, according to the researchers, have a major impact on the potential for further robotization.

    Source: Consultancy.nl, October 3, 2016


  • SAS: 4 real-world artificial intelligence applications

    Everyone is talking about AI (artificial intelligence). Unfortunately, a lot of what you hear about AI in movies and on TV is sensationalized for entertainment.

    Indeed, AI is overhyped. But AI is also real and powerful.

    Consider this: engineers worked for years on hand-crafted models for object detection, facial recognition and natural language translation. Despite those algorithms being honed by the best of our species, their performance does not come close to what data-driven approaches can accomplish today. When we let algorithms discover patterns from data, they outperform human-coded logic for many tasks that involve sensing the natural world.

    The powerful message of AI is not that machines are taking over the world. It is that we can guide machines to generate tremendous value by unlocking the information, patterns and behaviors that are captured in data.

    Today I want to share four real-world applications of SAS AI and introduce you to five SAS employees who are working to put this technology into the hands of decision makers, from caseworkers and clinicians to police officers and college administrators.

    Augmenting health care with medical image analysis

    Fijoy Vadakkumpadan, a Senior Staff Scientist on the SAS Computer Vision team, is no stranger to the importance of medical image analysis. He credits ultrasound technology with helping to ensure a safe delivery of his twin daughters four years ago. Today, he is excited that his work at SAS could make a similar impact on someone else’s life.

    Recently, Fijoy’s team has extended the SAS Platform to analyze medical images. The technology uses an artificial neural network to recognize objects on medical images and thus improve healthcare.

    Designing AI algorithms you can trust

    Xin Hunt, a Senior Machine Learning Developer at SAS, hopes to have a big impact on the future of machine learning. She is focused on interpretability and explainability of machine learning models, saying, 'In order for society to accept it, they have to understand it'.

    Interpretability is about gaining a mathematical understanding of the outputs of a machine learning model. You can use interpretability methods to show how the model reacts to changes in the inputs, for example.

    Explainability goes further than that. It offers full verbal explanations of how a model functions, what parts of the model logic were derived automatically, what parts were modified in post-processing, how the model meets regulations, and so forth.
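    One common interpretability probe of the "how does the model react to changes in the inputs" kind is to nudge each feature in turn and watch how far the output moves. The sketch below is a minimal, hypothetical Python illustration (not SAS code; the toy model and feature names are invented):

```python
def sensitivity(model, example, delta=1.0):
    """Measure how much the model's output moves when each input
    feature is nudged by `delta` -- a crude interpretability probe."""
    base = model(example)
    scores = {}
    for name in example:
        perturbed = dict(example)       # copy, so the original is untouched
        perturbed[name] += delta
        scores[name] = abs(model(perturbed) - base)
    return scores

# A toy "model": income matters far more than age in this score.
model = lambda x: 0.8 * x["income"] + 0.05 * x["age"]
print(sensitivity(model, {"income": 50.0, "age": 30.0}))
# income shows a much larger effect than age
```

    Real interpretability toolkits refine this idea (permutation importance, partial dependence, SHAP values), but the core question is the same: which inputs move the output, and by how much.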

    Making machine learning accessible to everyone

    From exploring and transforming data to selecting features and comparing algorithms, there are multiple steps to building a machine learning model. What if you could apply all those steps with the click of a button?

    That’s what the development teams of Susan Haller and Dragos Coles have done. Susan is the Director of Advanced Analytics R&D and Dragos is a Senior Machine Learning Developer at SAS. They are showing a powerful tool that offers an API for dynamic, automated model building. The model is completely transparent, so you can examine and modify it after it is built.

    Deploying AI models in the field

    You can do everything right when building and refining a machine learning model, but if you do not deploy it where decisions are made it will not do any good.

    Seb Charrot, a Senior Manager in the Scottish R&D Team, enjoys deploying analytics to solve real problems for real people. He and his team build SAS Mobile Investigator, an application that allows caseworkers, investigators and officers in the field to receive tasks, be notified of risks and concerns regarding their caseload or coverage area, and raise reports on the go.

    Moving AI into the real world

    When you move past the science project phase of analytics and build solutions for the real world, you will find that you can enable everyone, not just those with data science degrees, to make decisions based on data. As a result, everyone’s jobs become easier and more productive. Plus, increased access to analytics leads to faster and more reliable decisions. Technology is unstoppable; it is who we are and what we do. Not just at SAS, but as a species.

    Author: Oliver Schabenberger

    Source: SAS

  • Should we fear Artificial Intelligence

    If you watch films and TV shows in which AI has been exploited to create any number of apocalyptic scenarios, the answer might be yes. After watching Blade Runner or The Matrix or, as a more recent example, Ex Machina, it’s easier to understand why AI touches off visceral reactions in the layman.

    It’s no secret that automation has posed a real threat to lower-skilled workers in blue collar industries, and that has grown into a fear of all forms of artificial intelligence. But a lot of complexities stand between where we are today and production AI, particularly the struggle to bridge the AI chasm. In other words, the type of AI Hollywood suggests we should fear, taking our jobs and possibly more, is a long way off.

    At the other end of the pop culture spectrum, we have people who have embraced AI as the future of mankind. Google’s chief futurist Ray Kurzweil is a great example of thinkers who have championed AI as the next step in the evolution of human intelligence. So which version is our AI future?

    The truth is likely somewhere in the middle. Artificial intelligence won’t compete against humans with extinction-level stakes à la Terminator, at least in forthcoming years; nor will it transcend us as Kurzweil suggests. The likeliest outcome in the near future is we carve out symbiotic roles for the two, because of their respective shortcomings.

    While many people expect all AI they interact with to pass the Turing test, the human brain is the most advanced machine we know of. Thanks to emotional intelligence, humans can interpret and adapt in real time to changing circumstances, and react differently to the same stimuli. Humans and their emotional intelligence make it tough for AI to be benchmarked.

    We are all talking about Amazon Go, Amazon’s attempt to bring its website to life in fully automated 3D retail centers. But who will customers talk to when an item is missing or a mistake is made in billing? We want human interactions, like a conversation with the neighborhood baker (if you’re French like me) or the opinion of a salesperson on the fit of a jacket. Now we also want efficiency, but not to the exclusion of adaptable and sympathetic emotional intelligence. 

    In some situations, efficiency and safety are preferred over empathy or creativity. For instance, many favor the delegation of hazardous tasks in factories or oilfields to machines, letting humans handle higher-level strategic tasks like managing employees or drawing on both the left and right brain to flesh out designs.

    The world is becoming a more complex place, and we can welcome more AI to help us navigate it. Consider the accelerating advance of research in many scientific fields, which makes staying an expert even in a well-defined field a real challenge. The issue is not just that your field is growing, but that it touches on and draws from many other fields that are growing as well. As a result, knowledge bases are growing exponentially.

    A heart surgeon faced with a tough choice may consult a few books or a couple of experts and then identify patterns and weigh different outcomes to make a decision. Instead, they could draw on an AI to assimilate the knowledge base and reach a logical decision from a truly holistic standpoint. This does not guarantee that it will be the right answer. Machine learning can help the surgeon weigh thousands of similar cases, consider every medical angle, and even cross-reference the patient’s family history. The surgeon could even cover all this ground in less time than it would have taken to page through books or call advisors. But the purely logical decision should not be the right and final decision. Doing the right thing is different from having the highest probability of success, and so the surgeon will have to consider empathy for the family, the quality of life of the patient, and many other emotional factors.

    For now, machine learning is the most straightforward AI component to implement, and the one critical to improving the human condition. ML limits AI outputs to assimilating large quantities of data and defining patterns, but it acknowledges that AI cannot evaluate complex, novel, or emotional variables and leaves multidimensional decision making to humans. 

    As researchers and futurists struggle to bring true AI to the masses, it will be a progressive transition. What I am interested to see is whether or not a rapid transition could trigger a generational clash.

    Just as there are pre-Internet and post-Internet generations, will we see pre-AI and post-AI ones? If that’s the case, as with many technologies, the last generation to fear it may raise the first generation to embrace it.

    Author: Isabelle Guis 

  • Successfully implementing AI into practice

    Artificial Intelligence (AI) can be a real value driver for organizations. As the power of algorithms, computing and data surges, companies in manufacturing and industry are starting to see an increasing number of use cases. These systems can drive efficiency and enhance capability, but also automate tasks, decrease costs and improve revenue.

    The success of, and value generated by, AI benefit from a good understanding, from the C-suite down, of what the technology can deliver. Organizations should also have a well-considered implementation process. So concludes IBM in its recently published white paper on AI, ‘Beyond the hype: A guide to understanding and successfully implementing artificial intelligence within your business’.

    Putting AI into practice: specific tasks

    AI is not about sentient robots and magic boxes. AI is a science and a set of computational technologies inspired by the ways people use their nervous systems and bodies to sense, learn, reason and take action, though they typically operate quite differently. AI encompasses machine learning (machines that learn from data, with algorithms adjusting themselves) and deep learning (a combination of mutually linked algorithms).

    Within AI, data scientists extract knowledge from and interpret data using the right tools and statistical methods. The machines learn to recognize patterns in the data that is fed to them, and map those patterns to future outcomes.


    Relevant AI use cases span virtually every industry, but three main macro domains continue to drive both adoption and most of the economic value across businesses. Cognitive engagement involves delivering new ways for humans to engage with machines. Cognitive insights and knowledge addresses how to augment humans who are overwhelmed with information and knowledge. And cognitive automation relates to moving from process automation to mimicking human intelligence, to facilitate complex, knowledge-intensive business decisions.

    Below are some examples of successful implementations within the industrial and manufacturing domain:

    • Using the many different sensor measurements available from large truck engines, a manufacturer trained a neural network to recognize normal and abnormal engine behavior. The model can detect when specific measurements are out of the ordinary, and such anomalous sensor readings are highly predictive of pending engine failures.
    • At a car manufacturer, supervised learning techniques were used to develop predictive models that provide an early warning of failure based on the system messages and sensor readings that continuously stream from the production line. This early warning is used to prioritize maintenance and to reduce downtime as well as false positives and needless effort.
    • A utility company combined the output of machine learning-based predictive models with prescriptive, mathematical optimization models to prescribe the optimal mix of power production sources that meets predicted demand at minimal cost. This required predicting demand as well as the available solar and wind energy capacity.
    • To understand its business dynamics and build an inventory of potentially relevant data sources, a material producer used machine learning models to learn price behavior and forecast future price development. The models also let buyers evaluate their own ‘what if’ scenarios, all brought together for the user in an interactive dashboard.
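
    The first example above, detecting abnormal engine behavior from sensor readings, can be caricatured in its simplest form as a statistical outlier check. This is only a sketch: the deployments described use trained neural networks, and the z-score threshold and the temperature values below are illustrative assumptions.

    ```python
    from statistics import mean, stdev

    def find_anomalies(readings, z_threshold=2.5):
        """Flag readings that deviate strongly from the series mean.

        A crude stand-in for the trained neural network described
        above: any reading more than z_threshold standard deviations
        from the mean is reported as anomalous.
        """
        mu = mean(readings)
        sigma = stdev(readings)
        if sigma == 0:
            return []
        return [i for i, r in enumerate(readings)
                if abs(r - mu) / sigma > z_threshold]

    # Mostly-normal engine temperatures with one out-of-range spike.
    temps = [90.1, 90.4, 89.8, 90.0, 90.3, 90.2, 140.0, 90.1, 89.9, 90.0]
    print(find_anomalies(temps))  # → [6]
    ```

    A production system would score readings against a model of normal behavior learned from history rather than against the window's own statistics, but the shape of the output, indices of suspect measurements, is the same.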

    There are three main steps to implement AI:

    1. Develop an AI strategy and roadmap
    2. Establish AI capabilities and skills
    3. Start small and scale quickly

    In the previously mentioned white paper, IBM provides practical recommendations for avoiding frequent pitfalls such as cultural or managerial resistance, bad or insufficient data, unrealistically high or low expectations, and a lack of capabilities.

    Based on its experience and knowledge, IBM can help to successfully implement AI and guide organizations in the transformation to Industry 4.0. IBM enables companies to experiment with big ideas, acquire new expertise and build new enterprise-grade solutions for immediate market impact. It gives companies the speed of a start-up, with the scale and rigor of an enterprise.

    Author: Marloes Roelands

    Source: IBM

  • The 8 most important industrial IoT developments in 2019


    From manufacturing to the retail sector, the infinite applications of the industrial internet of things (IIoT) are disrupting business processes, thereby improving operational efficiency and business competitiveness. The trend of employing IoT-powered systems for supply chain management, smart monitoring, remote diagnosis, production integration, inventory management, and predictive maintenance is catching up as companies take bold steps to address a myriad of business problems.

    No wonder the global technology spend on IoT is expected to reach USD 1.2 trillion by 2022. Growth in this segment will be driven both by firms deploying IIoT solutions and by the giant tech organizations developing them.

    To help you stay ahead of the curve, we have listed a few developments that will dominate the industrial IoT sphere.

    1. Cobots are gaining popularity

    Digitization is having a major impact on the industrial robotics segment as connected cobots, or collaborative robots, make their place in the smart manufacturing ecosystem. This trend is improving the efficiency of operations and the reliability of the production cycle.

    IIoT is making robots mobile and collaborative, offering technologies such as self-driving vehicles (mobile collaborative robots), machine vision (part identification), and additive manufacturing that can boost production efficiency and business growth with an excellent ROI. No wonder the global cobot market crossed USD 649 million in 2018 and is expected to expand at a CAGR of 44.5% between 2019 and 2025.

    2. Digital twins are on the rise

    A growing number of firms are deploying IoT solutions to develop a digital replica of their business assets. Thus, instead of sending data to each physical receiver separately, all the information is sent to the digital twin, enabling business units to access the data with ease.

    Digital twins are growing in popularity as they decrease the complexity of the IoT ecosystem while boosting its efficiency. Gartner shares that 24% of enterprises are already using digital twins and an additional 42% plan to ride on this wave in the coming three years.

    Smart businesses are already using digital twin software to incorporate process data, enabling them to reach accurate insights and address operational inefficiencies.
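
    Stripped to its essentials, a digital twin is a continuously updated in-memory mirror of a physical asset that consumers query instead of polling the device itself. A minimal sketch follows; the asset name, telemetry fields and values are invented for illustration:

    ```python
    from datetime import datetime, timezone

    class DigitalTwin:
        """In-memory mirror of a physical asset's last known state.

        Telemetry is pushed once into the twin; any number of business
        units can then read the state without touching the device.
        """

        def __init__(self, asset_id):
            self.asset_id = asset_id
            self.state = {}
            self.updated_at = None

        def ingest(self, telemetry):
            """Merge a telemetry message into the mirrored state."""
            self.state.update(telemetry)
            self.updated_at = datetime.now(timezone.utc)

        def snapshot(self):
            """Read-only copy for consumers (dashboards, analytics)."""
            return dict(self.state)

    pump = DigitalTwin("pump-17")
    pump.ingest({"rpm": 1480, "temp_c": 61.5})
    pump.ingest({"temp_c": 63.0, "vibration_mm_s": 2.1})
    print(pump.snapshot())
    # {'rpm': 1480, 'temp_c': 63.0, 'vibration_mm_s': 2.1}
    ```

    Real digital twin platforms add history, simulation and access control on top, but the pattern, one write path from the device and many cheap read paths for business units, is what reduces the complexity described above.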

    3. Augmented reality is disrupting the manufacturing domain

    AR is benefiting the manufacturing domain in more ways than one. The technology has disrupted the manufacturing areas like product design and development, maintenance and field service, quality assurance, logistics, and hands-on training of new employees.

    For instance, in assembly operations, AR is replacing the traditional paper instruction manual with IoT-enabled systems that provide voice-controlled instructions along with video from the previous assembly operation.

    AR is also allowing manufacturing technicians to have access to instant intelligence and problem insights related to maintenance, thereby improving their efficiency and reducing equipment downtime.

    4. IoT-enabled predictive maintenance is becoming a part of the overall maintenance workflow

    With the advent of Industry 4.0, several enterprises are investing in IoT-enabled predictive maintenance of their assets to fix automated systems before they fail. In today’s competitive business environment, it is extremely important for firms to keep machines running seamlessly. Connected sensors and machine learning are helping companies anticipate component failures in advance, reducing both equipment downtime and the time machines are locked up for preventative maintenance checks.

    As a result, many organizations are running predictive analytics and machine learning to monitor systems and gather data, allowing them to estimate when components are likely to fail.
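
    One simple way to "estimate when components are likely to fail" is to fit a trend line to a degradation signal and extrapolate to a failure threshold. This is a sketch under strong assumptions (roughly linear degradation, a known alarm threshold); the vibration readings and the 10 mm/s threshold below are invented, and real deployments use learned models rather than a single least-squares line:

    ```python
    def remaining_cycles(readings, failure_threshold):
        """Fit a least-squares line through (cycle, reading) and
        extrapolate to the cycle where it crosses failure_threshold.
        Returns None if the signal is not trending upward."""
        n = len(readings)
        xs = range(n)
        x_mean = sum(xs) / n
        y_mean = sum(readings) / n
        denom = sum((x - x_mean) ** 2 for x in xs)
        slope = sum((x - x_mean) * (y - y_mean)
                    for x, y in zip(xs, readings)) / denom
        if slope <= 0:
            return None
        intercept = y_mean - slope * x_mean
        crossing = (failure_threshold - intercept) / slope
        return max(0.0, crossing - (n - 1))

    # Vibration rising ~0.5 mm/s per cycle; alarm threshold at 10 mm/s.
    vibration = [2.0, 2.5, 3.0, 3.5, 4.0]
    print(remaining_cycles(vibration, 10.0))  # → 12.0
    ```

    An estimate like this is exactly what lets maintenance be prioritized: components with few predicted cycles remaining get serviced first, instead of on a fixed calendar.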

    5. 5G will drive real-time IIoT applications

    5G deployments are digitizing the industrial domain and changing the way enterprises manage their business operations. Industries, namely transportation, manufacturing, healthcare, energy and utilities, agriculture, retail, media, and financial services will benefit from the low latency and high data transfer speed of 5G mobile networks.

    For instance, in the manufacturing domain, 5G will power factory automation, ensuring that processes happen within their required time frames and thereby reducing the risk of downtime. Further, 5G will help manufacturers with real-time production inspection and assembly line maintenance.

    6. Firms are shifting from centralized cloud to edge computing

    Until now, the centralized cloud was a popular choice among firms for controlling connected devices and data. However, with IoT devices and sensors expected to generate an ocean of data, more and more enterprises want IoT to monitor and report data and events remotely.

    Though most firms are using centralized cloud-based solutions to collect data, they are facing issues, such as high network load, poor response time, and security risks. Edge computing is helping businesses collect, analyze, and store data close to its source, thereby reducing the costs and security risks and improving system efficiency. That explains the growing demand for edge computing.

    A research report from Business Insider Intelligence forecasts that by 2020, there will be over 5,635 million smart sensors and other IoT devices globally, generating over 507.5 zettabytes of data. The need to collect and process this data at local collection points is what’s triggering the shift from centralized cloud to edge computing.
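
    The saving that drives this shift comes largely from reducing what crosses the network: the edge node summarizes raw readings locally and ships only compact aggregates upstream, forwarding raw values immediately only when they breach limits. A toy sketch of that split (the window, limits and field names are illustrative assumptions):

    ```python
    def edge_filter(readings, low, high):
        """Return (summary, alerts): one aggregate record for the
        whole window, plus the raw readings that breached limits and
        must go upstream immediately. Everything else stays local."""
        alerts = [(i, r) for i, r in enumerate(readings)
                  if r < low or r > high]
        summary = {
            "count": len(readings),
            "mean": sum(readings) / len(readings),
            "min": min(readings),
            "max": max(readings),
        }
        return summary, alerts

    window = [21.0, 21.2, 20.9, 35.5, 21.1]   # one reading out of range
    summary, alerts = edge_filter(window, low=15.0, high=30.0)
    print(alerts)            # → [(3, 35.5)]
    print(summary["count"])  # → 5
    ```

    Here a window of five raw samples becomes one four-field record plus the single urgent reading, which is the kind of reduction that lowers network load and response time at the scale of millions of sensors.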

    7. Firms will continue to invest in cybersecurity

    Cybersecurity threats continue to evolve each day, and breaches of connected systems pose a serious threat to data, causing massive system disruption and losses to the firm. A 2018 data breach study by IBM revealed that the average data breach costs companies USD 3.86 million globally.

    As a result, an increasing number of firms are investing in innovative services like virtual private network (VPN) to access the internet safely. Such innovative security solutions are becoming increasingly popular with enterprises across domains.

    8. IoT analytics is gaining significance

    While sectors such as manufacturing, aerospace, and energy and utilities are deploying IoT-powered sensors and wireless technologies, the true value of industrial IoT lies in analytics. Connected systems generate large amounts of data that need to be employed effectively to optimize operations, so demand for IoT analytics will rise in the coming years. As a result, firms will have to depend on AI and ML technologies to find effective ways to manage the data overload.

    Companies like SAS, SAP, and Teradata are already offering advanced analytics software to help enterprises evaluate real-time data streaming from connected systems on the shop floor.

    Going forward

    IIoT is all set to fuel the fourth industrial revolution. Firms across various industries are adopting innovative IoT devices and technologies to accelerate business growth. These IIoT deployments will help enterprises improve operational efficiency, reduce downtime, and get a serious competitive advantage in their respective domains.

    The IIoT developments shared in this post will set the stage for innovative enterprise platforms and tech advancements. Organizations wanting to remain competitive should not only be aware of these trends but also take adequate measures to embrace them.

    Source: Datafloq

  • The ability to speed up the training for deep learning networks used for AI through chunking


    At the International Conference on Learning Representations on May 6, IBM Research shared a look at how chunk-based accumulation can speed up the training of deep learning networks used for artificial intelligence (AI).

    The company first shared the concept and its vast potential at last year’s NeurIPS conference, when it demonstrated the ability to train deep learning models with 8-bit precision while fully preserving model accuracy across all major AI data set categories: image, speech and text. The result? This technique could accelerate training time for deep neural networks by two to four times over today’s 16-bit systems.

    In IBM Research’s new paper, titled 'Accumulation Bit-Width Scaling For Ultralow Precision Training of Deep Networks', researchers explain in greater depth exactly how the concept of chunk-based accumulation works to lower the precision of accumulation from 32-bits down to 16-bits. 'Chunking' takes the product and divides it into smaller groups of accumulation and then adds the result of each of these smaller groups together, leading to a significantly more accurate result than that of normal accumulation. This allows researchers to study new networks and improve the overall efficiency of deep learning hardware.
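
    The effect of chunk-based accumulation is easy to demonstrate outside specialized hardware. The sketch below emulates float32 arithmetic in pure Python by round-tripping through `struct`, as an illustrative stand-in for the reduced-precision accumulators in the paper: summing many small values in one long low-precision chain drifts badly, while summing them in chunks and then adding the chunk totals stays close to the true value.

    ```python
    import struct

    def f32(x):
        """Round a Python float to the nearest float32 value."""
        return struct.unpack("f", struct.pack("f", x))[0]

    def naive_sum(values):
        acc = 0.0
        for v in values:
            acc = f32(acc + v)          # one long low-precision chain
        return acc

    def chunked_sum(values, chunk=1000):
        totals = []
        for start in range(0, len(values), chunk):
            acc = 0.0
            for v in values[start:start + chunk]:
                acc = f32(acc + v)      # short chains stay accurate
            totals.append(acc)
        acc = 0.0
        for t in totals:                # then add the chunk totals
            acc = f32(acc + t)
        return acc

    values = [0.1] * 100_000            # true sum: 10,000
    print(naive_sum(values))            # drifts noticeably from 10,000
    print(chunked_sum(values))          # stays close to 10,000
    ```

    The chunked version is more accurate because each accumulator only ever adds numbers of comparable magnitude, which is the same reason the technique lets training accumulate at lower bit-widths without losing model accuracy.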

    Although this approach was previously considered infeasible to further reduce precision for training, IBM expects this 8-bit training platform to become a widely adopted industry standard in the coming years.

    Author: Daniel Gutierrez

    Source: Insidebigdata

  • The big data race reaches the City


    Vast amounts of information are being sifted for the good of commercial interests as never before

    IBM’s Watson supercomputer, once known for winning the television quiz show Jeopardy! in 2011, is now sold to wealth management companies as an affordable way to dispense investment advice. Twitter has introduced “cashtags” to its stream of social chatter so that investors can track what is said about stocks. Hedge funds are sending up satellites to monitor crop yields before even the farmers know how they’re doing.

    The world is awash with information as never before. According to IBM, 90pc of all existing data was created in the past two years. Once the preserve of academics and the geekiest hedge fund managers, the ability to harness huge amounts of noise and turn it into trading signals is now reaching the core of the financial industry.

    Last year was one of the toughest since the financial crisis for asset managers, according to BCG partner Ben Sheridan, yet they have continued to spend on data management in the hope of finding an edge in subdued markets.

    “It’s to bring new data assets to bear on some of the questions that asset managers have always asked, like macroeconomic movements,” he said.

    “Historically, these quantitative data aspects have been the domain of a small sector of hedge funds. Now it’s going to a much more mainstream side of asset managers.”

    Banks are among the biggest investors in big data

    Even Goldman Sachs has entered the race for data, leading a $15m investment round in Kensho, which stockpiles data around major world events and lets clients apply the lessons it learns to new situations. Say there’s a hurricane striking the Gulf of Mexico: Kensho might have ideas on what this means for US jobs data six months afterwards, and how that affects the S&P stock index.

    Many businesses are using computing firepower to supercharge old techniques. Hedge funds such as Winton Capital already collate obscure data sets such as wheat prices going back nearly 1,000 years, in the hope of finding patterns that will inform the future value of commodities.

    Others are paying companies such as Planet Labs to monitor crops via satellite almost in real time, offering a hint of the yields to come. Spotting traffic jams outside Wal-Marts can help traders looking to bet on the success of Black Friday sales each year – and it’s easier to do this from space than sending analysts to car parks.

    Some funds, including Eagle Alpha, have been feeding transcripts of calls with company executives into a natural language processor – an area of artificial intelligence that the Turing test foresaw – to figure out if they have gained or lost confidence in their business. Traders might have had gut feelings about this before, but now they can get graphs.
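
    The transcript analysis described here can be caricatured as a tiny lexicon-based scorer: count confident versus hedging language in an executive's remarks and track the balance across successive calls. The word lists and example quotes below are invented for illustration; systems like those the funds use rely on trained language models rather than fixed lexicons:

    ```python
    CONFIDENT = {"strong", "confident", "growth", "record", "exceed", "optimistic"}
    HEDGING = {"uncertain", "challenging", "headwinds", "cautious", "difficult", "risk"}

    def confidence_score(transcript):
        """(confident - hedging) word count, normalized by length."""
        words = [w.strip(".,!?").lower() for w in transcript.split()]
        score = sum(w in CONFIDENT for w in words) - sum(w in HEDGING for w in words)
        return score / len(words)

    q1 = "We are confident of record growth and expect to exceed guidance."
    q2 = "The outlook is uncertain, with challenging headwinds and cautious spending."
    print(confidence_score(q1) > 0, confidence_score(q2) < 0)  # → True True
    ```

    Plotting this score quarter by quarter is the "graph" replacing the gut feeling: a falling score across calls is a signal worth investigating, even before any trade is made on it.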


    There is inevitably a lot of noise among these potential trading signals, which experts are trying to weed out.

    “Most of the breakthroughs in machine-learning aren’t in finance. The signal-to-noise ratio is a problem compared to something like recognising dogs in a photograph,” said Dr Anthony Ledford, chief scientist for the computer-driven hedge fund Man AHL.

    “There is no golden indicator of what’s going to happen tomorrow. What we’re doing is trying to harness a very small edge and doing it over a long period in a large number of markets.”

    The statistics expert said the plunging cost of computer power and data storage, crossed with a “quite extraordinary” proliferation of recorded data, have helped breathe life into concepts like artificial intelligence for big investors.

    “The trading phase at the moment is making better use of the signals we already know about. But the next research stage is, can we use machine learning to identify new features?”

    AHL’s systematic funds comb through 2bn price updates on their busiest days, up from 800m during last year’s peak.

    Developments in disciplines such as engineering and computer science have contributed to the field, according to the former academic based in Oxford, where Man Group this week jointly sponsored a new research professorship in machine learning at the university.

    The artificial intelligence used in driverless cars could have applications in finance

    Dr Ledford said the technology has applications in driverless cars, which must learn how to drive in novel conditions, and identifying stars from telescope images. Indeed, he has adapted the methods used in the Zooniverse project, which asked thousands of volunteers to help teach a computer to spot supernovae, to build a new way of spotting useful trends in the City’s daily avalanche of analyst research.

    “The core use is being able to extract patterns from data without specifically telling the algorithms what patterns we are looking for. Previously, you would define the shape of the model and apply it to the data,” he said.

    These technologies are not just being put to work in the financial markets. Several law firms are using natural language processing to carry out some of the drudgery, including poring over repetitive contracts.

    Slaughter & May has recently adopted Luminance, a due diligence programme that is backed by Mike Lynch, former boss of the computing group Autonomy.

    Freshfields has spent a year teaching a customised system known as Kira to understand the nuances of contract terms that often occur in its business.

    Its lawyers have fed the computer documents they are reading, highlighting the parts they think are crucial. Kira can now parse a contract and find the relevant paragraphs between 40pc and 70pc faster than a human lawyer reviewing it by hand.

    “It kicks out strange things sometimes, irrelevancies that lawyers then need to clean up. We’re used to seeing perfect results, so we’ve had to teach people that you can’t just set the machine running and leave it alone,” said Isabel Parker, head of innovations at the firm.

    “I don’t think it will ever be a standalone product. It’s a tool to be used to enhance our productivity, rather than replace individuals.”

    The system is built to learn any Latin script, and Freshfields’ lawyers are now teaching it to work on other languages. “I think our lawyers are becoming more and more used to it as they understand its possibilities,” she added.

    Insurers are also spending heavily on big data fed by new products such as telematics, which track a customer’s driving style in minute detail, to help give a fair price to each customer. “The main driver of this is the customer experience,” said Darren Price, group chief information officer at RSA.

    The insurer is keeping its technology work largely in-house, unlike rival Aviva, which has made much of its partnerships with start-up companies in its “digital garage”. Allianz recently acquired the robo-adviser Moneyfarm, and Axa’s venture fund has invested in a chat-robot named Gasolead.

    EY, the professional services firm, is also investing in analytics tools that can raise red flags for its clients in particular countries or businesses, enabling managers to react before an accounting problem spreads.

    Even the Financial Conduct Authority is getting in on the act. Having given its blessing to the insurance sector’s use of big data, it is also experimenting with a “sandbox”, or a digital safe space where its tech experts and outside start-ups can use real-life data to play with new ideas.

    The advances that catch on throughout the financial world could create a more efficient industry – and with that tends to come job cuts. The Bank of England warned a year ago that as many as 15m UK jobs were at risk from smart machines, with sales staff and accountants especially vulnerable.

    “Financial services are playing catch-up compared to some of the retail-focused businesses. They are having to do so rapidly, partly due to client demand but also because there are new challengers and disruptors in the industry,” said Amanda Foster, head of financial services at the recruiter Russell Reynolds Associates.

    But City firms, for all their cost pressures, are not ready to replace their fund managers with robots, she said. “There’s still the art of making an investment decision, but it’s about using analytics and data to inform those decisions.”

    Source: Telegraph.co.uk, October 8, 2016



  • The increasing impact of AI on cinemas


    Artificial Intelligence (AI) has become a giant in the tech industry and is transforming the workforce as we know it in all kinds of ways. Everything from transportation manufacturers to home appliance companies is using machine learning to streamline everyday activities.

    Less publicized is the movie theater industry, which is regaining ground it previously lost to streaming services by using AI and related technology like machine learning to its advantage. It has learned to adapt to this technology and is revolutionizing its marketing to bring viewers into theater seats. As a matter of fact, a number of film studios are experimenting with AI, and movie theaters have taken the opportunity to physically bring people back to their theaters rather than count them lost. The movie theater isn’t dead. It’s just transforming, and it’s doing so with the help of AI.

    Personalized advertisement

    A large part of the appeal of AI in movie marketing is personalization, which isn’t too surprising. AI shines in analytics, compiling data about customer decisions and trends, so it’s natural that a giant industry like film would use it to understand and communicate with its customers. What has changed from previous forms of movie marketing, however, is how exactly they reach those individuals.

    This concept begins with personalized advertisements. The movie advertisements you get on streaming services are being sent to you personally because AI has determined you will enjoy the movie in question. Furthermore, AI will be directing ads with price incentives for movies or concessions at customers based on how likely they are to see a certain film.

    “Giving movie-goers the opportunity to buy a ticket in advance for them and three friends might be the best way to go,” Movio Chief Executive Will Palmer told Indiewire. “On the other end of the spectrum, you might have this ‘least likely’ group, and you’ve got to make a decision: do I leave that group alone or do I activate that group? That might be a case of putting some form of price-based incentive or concession-based incentive to try to attract that group.”

    The idea is that people can buy tickets in advance, as well as concessions. They will be offered discounts when they’re promoted movies that AI thinks they will enjoy based on past experience and purchases. These things are all offered on an individual basis, and tailored specifically to individuals due to data gathered by AI analytics.
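
    The segmentation Palmer describes boils down to scoring each customer's predicted likelihood of seeing a given film and attaching an offer per band. A schematic sketch follows; the score bands, offer wording, and customer scores are invented for illustration, and in practice the likelihoods would come from a trained propensity model:

    ```python
    def pick_offer(likelihood):
        """Map a predicted probability of attendance to a marketing action."""
        if likelihood >= 0.8:
            return "advance-ticket bundle for you and three friends"
        if likelihood >= 0.4:
            return "standard trailer and showtime ad"
        if likelihood >= 0.1:
            return "price- or concession-based incentive"
        return "leave this group alone"

    # Hypothetical per-customer attendance probabilities for one film.
    customers = {"ana": 0.92, "ben": 0.55, "cam": 0.18, "dev": 0.03}
    for name, p in customers.items():
        print(name, "->", pick_offer(p))
    ```

    The interesting business decision sits at the bottom band: whether a cheap incentive can activate the "least likely" group at all, or whether the marketing spend is better left unspent.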

    Customer service

    AI help desks and virtual assistants are being used in several industries that depend on customer care for their income and revenue. But these same bots are also beginning to allow people to order concessions before they even get to the cinema. Imagine how this might change the movie-going experience.

    For instance, think about all the times you have waited in line for popcorn or drinks, and how the person ahead of you may not have known what they wanted. If you’ve ever been late to a movie due to prior responsibilities, this is particularly frustrating. But imagine if you could order that food with your ticket. You could just walk up, grab your order, and head into the movie without waiting for people to make up their minds. This makes the movie-going experience much more efficient, with less waiting and better delivery.

    Additionally, some apps are teaming up with movie theaters to replace MoviePass, a service in which moviegoers paid $10 a month and could see one movie a day for the entire month at no extra charge. Unfortunately, this proved unaffordable, and while MoviePass still exists, it is much more selective about which theaters and movies it works with.

    Some developers have been working to create similar apps with more practical operations, replacing the ticket-buying process altogether. Regarding an app called Sinemia, The Verge summarized: “What the app offers is access to any movie at any time with no blackouts and no theater restrictions whatsoever.” “Sinemia basically loads the funds onto your own personal debit card with the cash necessary to purchase your ticket, and then you’re good to go,” they added.

    Obviously, this is a fun way to get people who don’t normally go to movies to see a few each month, raising ticket sales, and it could make things easier on the theaters as well. Ticket purchasing and payments are totally streamlined with Sinemia. We will probably see more of this in the future.

    Is AI good or bad for the movie industry?

    AI is causing a lot of public concern because of the fear that it renders jobs previously done by humans obsolete. Take those at the ticket booth, for instance: with some of the aforementioned apps, they could be out of a job. However, AI may be the film industry’s only chance to adapt and survive, and right now it looks like AI is actually creating jobs rather than killing them. It does mean that employees and employers have to adapt their skill sets to the new context, which some do not know how to do.

    From that angle, AI is good for the movie industry. As Fast Company reported, technology like AI is literally helping design storylines and is being used to monitor which movies evoke which emotions in viewers. By using AI’s data to monitor viewer responses, films are being catered better to consumers.

    In fact, theater and acting are using AI to move into the future in general. As we already know, AI has been making an appearance in traditional acting experiences as well, making interactive theater art pieces a popular experience. So AI isn’t killing the film and entertainment industries, it’s saving them. And the human element doesn’t have to be removed if humans learn how to use it. The movie-going experience is improving and will continue to thrive if those in command keep using new technology to their benefit. Don’t think of AI as the enemy, think of it as a tool we can use to enjoy movies in different (more efficient) ways.

    Source: Datafloq

  • The massive impact of data science on the web development business


    “A billion hours ago, modern Homo sapiens emerged.
    A billion minutes ago, Christianity began.
    A billion seconds ago, the IBM personal computer was released.
    A billion Google searches ago… was this morning.”

    - Hal Varian, Google’s Chief Economist, December 2013 (from the book Work Rules! by Laszlo Bock)

    The last line of the above quote characterizes the world’s hunger for information. Information plays a huge role in our lives: consumed by our senses, it helps our minds make decisions. But what happens when the mind is flooded with information? You get confused, annoyed and scared of decision-making. This is where computers and processors come to the rescue, and this is when the term 'information' is replaced by 'data'.

    Every minute, more than a hundred hours of video content is uploaded to YouTube. Over 50 billion apps have been downloaded from application stores since 2008. More than 2 billion people have signed up on social media websites. These numbers give you just a glimpse of the amount of data flowing through the world’s optical fibers every second. And now the question comes: how do you make this massive amount of data useful? The answer is analytics. If you know how to play with numbers and extract the nectar of useful insights from this huge amount of data using appropriate analytical tools, then you, my friend, are a real data scientist.

    Data science is helping many businesses, irrespective of them being B2B or B2C. But in this article, we are going to talk more about its role in one of the biggest B2B industries: Custom Web Development. If you are a web developer, you must not ignore the rise of data science in your profession, and if you are thinking about hiring one, then you should know about the latest trends to supervise the development process in a better way. So, let’s discuss the impact of data science in the transformation of web development:

    1. Re(de)fining the software solutions

    Not very long ago, web developers had to be creative with page layouts and menu details, and it was mostly guesswork. Now data science tells web developers about the layouts and details of competitor websites, so they can propose a unique design after carefully evaluating the competition.

    Also, with the help of the latest analytical tools, web developers can learn what the requirements of end users are. Based on analysis of consumer data, they can suggest particular functions or features that are popular among customers. In this way, data science is helping developers provide better and faster software solutions to their clients.

    2. Automatic updates

    Gone are the days when updates had to be administered manually by developers. This is the era of automation: machine learning has enabled tools to analyze consumer behavior and data available on social media platforms and come up with the required updates. Websites are made self-learning so that they can improve themselves as customer demands change. This is possible only because data science is doing its job perfectly.

    Although this area still faces some challenges in creating customized solutions for different clients, custom web development services will soon make it a piece of cake with the help of data science.

    3. Customizing for end users

    So far we have discussed how web development can be customized for clients using data science, but the real goal should be the satisfaction of end users. And satisfaction depends on personalization. To create a personalized product for users, you need to know them, and this is where data science helps web developers.

    Spending habits, areas of interest, preferred websites, geographical location, age, gender: all this information about end users is used to create algorithmic models that can predict a consumer’s alignment with your web apps. Using these models, you can not only give users a personalized experience on the website but also strategically place ads targeting specific customer segments, creating a win-win situation for both buyer and seller.
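    As an illustration of such an algorithmic model, here is a hand-rolled logistic scoring sketch. The features, weights and bias are invented for the example; in practice they would be learned from historical consumer data, not set by hand.

```python
import math

def alignment_score(user, weights, bias=0.0):
    """Probability-like score that a user will engage with a web app.

    A hand-rolled logistic model: weighted sum of user features,
    squashed into (0, 1) with the sigmoid function.
    """
    z = bias + sum(weights[k] * v for k, v in user.items() if k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights: visit frequency and age bracket drive the score.
weights = {"visits_per_week": 0.8, "is_target_age": 1.2}
user = {"visits_per_week": 3, "is_target_age": 1}
score = alignment_score(user, weights, bias=-2.0)
print(round(score, 3))  # a score above 0.5 suggests targeting this user
```

    A segment-targeted ad campaign would then simply rank users by this score and pick the top slice.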

    4. Changing hot-skills

    Apart from changing the way the web is designed by developers, data science is influencing the transformation of web development in one more way: by revolutionizing the job market. With the ever-changing needs of the industry, web development companies want employees equipped with the skills to use the latest data and analytics tools.

    Developers looking for jobs today are expected to know tools like Python and Google Analytics, and they are asked in interviews about their proficiency in building AI and ML programs. Therefore, one has to stay updated to stay relevant.

    5. Customer’s expectations

    Do you get irritated when an Uber driver calls to ask about your pick-up location, even though it can easily be tracked by GPS and is clearly displayed on his device’s screen? Wouldn’t you feel uncomfortable if autocorrect stopped helping when you misspell something in your messenger? And don’t you feel nice when you buy a phone online and the web app suggests covers for it?

    Well, if the answer is yes, then you are becoming dependent on data science too. Don’t worry, you're not the only one. Customers worldwide like the extra help provided by businesses. And this dependency on data will soon make the use of data science a hygiene factor in web development.
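    The autocorrect behaviour mentioned above can be approximated in a few lines with Python's standard library. `difflib.get_close_matches` is a real stdlib function; the vocabulary and similarity cutoff here are illustrative.

```python
import difflib

def autocorrect(word, vocabulary):
    """Suggest the closest dictionary word, the way a messenger's
    autocorrect might; return the word unchanged if nothing is close."""
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.8)
    return matches[0] if matches else word

vocab = ["location", "message", "driver", "schedule"]
print(autocorrect("locatoin", vocab))  # location
print(autocorrect("xyz", vocab))       # xyz (no close match, left as-is)
```

    Production autocorrect adds keyboard-distance and frequency models on top, but the core "find the nearest known word" step is exactly this.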


    Although it’s called data science, using it is nothing less than an art. It requires expertise and dedication to develop a web app that fully harnesses the potential of data science.

    Data science is a vast field. It is responsible for AI, machine learning, big data, analytics, etc., and it also drives technologies such as the Internet of Things and AR/VR. Hence, when all the modern buzzwords of business are somehow related to data science, it would take absolute ignorance to neglect its role in the development of websites and web apps.

    Source: Datafloq

  • The most wanted skills for organizations migrating to the cloud

    The most wanted skills for organizations migrating to the cloud

    Given the widespread move to cloud services underway today, it’s not surprising that there’s growing demand for a variety of cloud-related skills.

    Earlier this year, IT consulting and talent services firm Akraya Inc. compiled a list of the most in-demand cloud skills for 2019. Let's take a look at them:

    Cloud security

    Cloud security is a shared responsibility between cloud providers and their customers. That creates a need for professionals with specialization in cloud security skills, including those who can leverage cloud security tools.

    Machine learning (ML) and artificial intelligence (AI)

    In recent years cloud vendors have developed and expanded their set of tools and services that allow organizations to reap the benefits of machine learning and artificial intelligence in the cloud. Companies need people who can leverage these new capabilities of the cloud.

    Cloud migration and deployment within multi-cloud environments

    Many organizations are looking to adopt multiple cloud services and need professionals who can contribute to their cloud migration efforts. Cloud migration has its risks and is not an easy process; improper migration often leads to business downtime and data vulnerability. This makes employees with the appropriate skillset key.

    Serverless architecture

    In a server-based architecture, cloud developers have to manage the underlying cloud server infrastructure. But today’s cloud is built on industry-standard technologies and programming languages that help move serverless applications from one cloud vendor to another, Akraya said. Companies therefore need expertise in serverless application development.

    Author: Bob Violino

    Source: Information-management

  • The reinforcing relationship between AI and predictive analytics

    The reinforcing relationship between AI and predictive analytics

    Enterprises have long seen the value of predictive analytics, but now that AI (artificial intelligence) is starting to influence forecasting tools, the benefits may start to go even deeper.

    Through machine learning models, companies in retail, insurance, energy, meteorology, marketing, healthcare and other industries are seeing the benefits of predictive analytics tools. With these tools, companies can predict customer behavior, foresee equipment failure, improve forecasting, identify and select the best product fit for customers, and improve data matching, among other things.

    Enterprises of all sizes are now finding that the combination of predictive analytics and AI can help them stay ahead of their competitors.

    Forecasting gets a boost with AI

    Retail brands are constantly looking to stay relevant by associating themselves with the latest trends. Before each season, designers are continuously working on creating new styles and designs they think will be successful. However, these predictions can be faulty based on a number of factors, such as changes in customer buying patterns, changing tastes in particular colors or styles, and other factors that are difficult to predict.

    AI-based approaches to demand projection can reduce forecasting errors by up to 50%, according to Business of Fashion. This improvement can mean big savings for a retail brand's bottom line and positive ROI for organizations that are inventory-sensitive.

    Another industry that has seen tremendous improvements recently is meteorology and weather forecasting. Traditionally, weather forecasting has been prone to error. However, that is changing, as the accuracy of 5-day forecasts and hurricane tracking forecasts has improved dramatically in recent years.

    According to the Weather Channel, hurricane track forecasts are now more accurate five days in advance than two-day forecasts were in 1992. These extra few days can give people in a hurricane's path extra time to prepare and evacuate, potentially saving lives.

    Another example is the use of predictive analytics by utility companies to help spot trends in energy usage. Smart meters monitor activity and notify customers of consumption spikes at certain times of the day, helping them cut back on power usage. Utility companies are also helping customers predict when they might get a high bill, based on a variety of data points, and can send out alerts to warn customers who are running up a large bill that month.
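    A high-bill alert of the kind described can be as simple as a straight-line projection from month-to-date smart-meter readings. This is a toy sketch with made-up numbers, not a utility's actual model:

```python
def projected_monthly_usage(daily_kwh, days_in_month=30):
    """Naive straight-line projection of month-end usage from
    month-to-date smart-meter readings (kWh per day)."""
    if not daily_kwh:
        return 0.0
    return sum(daily_kwh) / len(daily_kwh) * days_in_month

def high_bill_alert(daily_kwh, budget_kwh, days_in_month=30):
    """True if the customer is on track to exceed their usage budget."""
    return projected_monthly_usage(daily_kwh, days_in_month) > budget_kwh

readings = [12.0, 11.5, 14.0, 13.5, 12.5]         # first 5 days of the month
print(projected_monthly_usage(readings))           # 381.0
print(high_bill_alert(readings, budget_kwh=350))   # True -> send the warning
```

    Real utilities fold in seasonality, weather and tariff tiers, but the alert logic is the same shape: project forward, compare to a threshold, notify.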

    Reducing downtime and disturbance

    For industries that heavily rely on equipment, such as manufacturing, agriculture, energy, mining etc., unexpected downtime can be costly. Companies are increasingly using predictive analytics and AI systems to help detect and prevent failures.

    AI-enabled predictive maintenance systems can self-monitor and report equipment issues in real time. IoT sensors attached to critical equipment gather real-time data, spotting issues or potential problems as they arise and notifying teams so they can respond right away. The systems can also predict upcoming issues, reducing costly unplanned downtime.
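    As a minimal stand-in for such anomaly spotting, a z-score check flags readings that sit far from the rest. Real predictive maintenance uses far more sophisticated models; the sensor values here are invented:

```python
from statistics import mean, stdev

def anomalies(readings, threshold=3.0):
    """Indices of sensor readings more than `threshold` standard
    deviations from the mean of the window."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]

# Vibration readings from a hypothetical pump; one sample is way off.
vibration = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 2.4, 0.51]
print(anomalies(vibration, threshold=2.0))  # [6] -> notify the team
```

    In a deployed system this check would run over a sliding window of the live sensor stream, with the threshold tuned per machine.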

    Power plants need to be monitored constantly to make sure they are functioning properly and safely, and keep providing energy to all the customers that rely on them for electricity. Predictive analytics is being used to run early warning systems that can identify anomalies and notify managers of issues weeks to months earlier than traditional warning systems. This can lead to improved maintenance planning and more efficient prioritization of maintenance activities.

    Additionally, AI can help predict when a component or piece of equipment might fail, reducing unexpected equipment failure and unplanned downtime while also lowering maintenance costs.

    In industries which rely heavily on location data, such as mining, making sure you're operating in the correct area is paramount. Goldcorp, one of the largest gold mining companies in the world, partnered with IBM Watson to improve its targeting of new deposits of gold.

    By analyzing previously collected data, IBM Watson was able to improve geologists' accuracy of finding new gold deposits. Through the use of predictive analytics, the company was able to gather new information from existing data, better determine specific areas to explore next, and reach high-value exploration targets faster.

    Increased situational awareness

    Predictive analytics and AI are also great at anticipating situational events by collecting data from the environment and making decisions based on that data. Such systems predict future events from data rather than merely reacting to current data.

    Brands need to stay on top of their online presence, as well as what's being said about them on social media. Tracking social media to get real-time feedback from customers is important, especially for retail brands and restaurants. Bad reviews and negative comments can be detrimental, particularly for smaller brands.

    With this awareness and by tracking comments on social media in (near) real-time, companies can gather immediate feedback and respond to situations quickly. Situational awareness can also help with competition tracking, market awareness, market trend predictions and anticipated geopolitical problems.

    With companies of all sizes in every industry trying to stay ahead of their competitors and predict market trends, this forward-looking approach of predictive analytics is proving valuable. Predictive analytics is such a core part of AI application development that it is one of the seven core patterns of AI identified by AI market research and analysis firm Cognilytica.

    The use of machine learning to help give humans more data to make better decisions is compelling, and it's one of the most beneficial uses of machine learning technology.

    Author: Kathleen Walch

    Source: TechTarget

  • The status of AI in European businesses

    The status of AI in European businesses

    What is the future of AI (artificial intelligence) in Europe and what does it take to build an AI solution that is attractive to investors and customers at the same time? How do we reimagine the battle of 'AI vs Human Creativity' in Europe? 

    Is there any company that is not using AI or isn’t AI-enabled in some way? Whether it is startups or corporates, it is no news that AI is boosting digital transformation across industries at a global level, and hence it has traction not only from investors but is also the focus of government initiatives across countries. But where does Europe stand relative to the US and China in terms of digitization, and how could a collective effort push AI as an important pan-European strategic topic?

    First things first: according to McKinsey, Europe's potential to deliver on AI and catch up with the most AI-ready countries, such as the United States, and emerging leaders like China is large. If Europe on average develops and diffuses AI according to its current assets and digital position relative to the world, it could add some €2.7 trillion, or 20%, to its combined economic output by 2030. If Europe were to catch up with the US AI frontier, a total of €3.6 trillion could be added to collective GDP in this period.

    What comprises the AI landscape and is it too crowded?

    I recently attended a dedicated panel on 'AI vs Human Creativity' on the first day of the Noah Conference 2019 in Berlin. Moderated by Pamela Spence, Partner of Global Life Sciences and Industry leader at EY, the discussion started with an open question: is the AI landscape too crowded? According to a report by EY, there are currently about 14,000 startups globally that can be associated with the AI landscape. But what does this mean when it comes to the nature of these startups?

    Minoo Zarbafi, VP of Bertelsmann Investments Digital Partnerships, added perspective to these numbers: 'There are companies that are AI-enabled and then there are so-called AI-first companies. I differentiate because there are almost no companies today that are not using AI in their processes. From an investor perspective, we at Bertelsmann like AI-first companies which are offering a B2B (business-to-business) platform solution to an unsolved problem. For instance, we invested in China in two pioneer companies in the domain of computer vision that are offering a B2B solution for autonomous driving'. Minoo added that from a partnership perspective Bertelsmann looks at AI companies that can help on the digital transformation journey of the company. 'The challenge is to find the right partner with the right approach for our use cases. And we actively seek the support of European and particularly German companies from the startup ecosystem when selecting our partners', she pointed out.

    The McKinsey report likewise notes a positive point: Europe may not need to compete head to head, but rather in areas where it has an edge (such as B2B and advanced robotics), and can continue to scale up one of the world’s largest bases of technology developers into a more connected, Europe-wide web of AI-based innovation hubs.

    Growing shares of funding from Series A and beyond reflect the increased maturity of the AI ecosystem in Europe. Pamela Spence from EY noted: 'One in 12 startups uses AI as a part of their product or services, up from one in 50 about six years ago. Startups labelled as being in AI attract up to 50% more funding than other technology firms. 40% of European startups that are claimed as AI companies actually don’t use AI in a way that is material to their business'.

    AI and human creativity go hand-in-hand

    Another interesting and important question is how far we are from the paradigm of clever thinking machines. And why should we be afraid of machines? Hans-Christian Boos, CEO & Founder of Arago, recalls how machines used to do tasks that are too tedious, expensive or complex for humans. 'The principle of the machine changes with AI. It used to just automate tasks or standardise them. Now, all you need is to describe what you want as an outcome and the machine will find that outcome for you, that is a different ballgame altogether. Everything is result-oriented', he says.

    Minoo Zarbafi adds that as human beings, we have a limited capacity for processing information. 'With the help of AI, you can now digest much more information which may, combined with human creativity, cause you to find innovative solutions that you could not see before. One could say, the more complexity, the better the execution with AI. At Bertelsmann, our organisation is decentralised and it will be interesting to see how AI leverages operational execution'.  

    AI and the political landscape

    Why discuss AI when we talk about the digital revolution in Europe? According to the tech.eu report titled ‘Seed the Future: A Deep Dive into European Early-Stage Tech Startup Activity’, it is safe to say that artificial intelligence, machine learning and blockchain lead the way in Europe. The European Commission has identified artificial intelligence as an area of strategic importance for the digital economy, citing its cross-cutting applications to robotics, cognitive systems and big data analytics. To support this, the Commission’s Horizon 2020 programme includes considerable funding for AI, with €700M of EU funding allocated specifically.

    Chiara Sommer, Investment Director of Intel Capital, reflected on this by saying: 'In the present scenario, the implementation of AI starts with workforce automation, with a focus on how companies could reduce cost and become more efficient. The second generation of AI companies focuses on how products can offer solutions and solve problems like never before. Entire departments can be replaced by AI. Having said that, the IT industry adopts AI fastest, followed by industries like healthcare, retail and the financial sector'.

    Why are some companies absorbing AI technologies while most others are not? Among the factors that stand out are their existing digital tools and capabilities and whether their workforce has the right skills to interact with AI and machines. Only 23% of European firms report that AI diffusion is independent of both previous digital technologies and the capabilities required to operate with those digital technologies; 64% report that AI adoption must be tied to digital capabilities, and 58% to digital tools. McKinsey reports that the two biggest barriers to AI adoption in European companies are linked to having the right workforce in place.

    It will certainly take a collective effort of industries, governments, policy makers and corporates to make the use of AI effective and impactful. Instead of asking how AI will change society, Hans-Christian Boos rightly concludes: 'We should change the society to change AI'.

    Author: Diksha Dutta

    Source: Dataconomy

  • The three key challenges that could derail your artificial intelligence project

    It’s been abundantly clear for a while that in 2017, artificial intelligence (AI) is going to be front and center of vendor marketing as well as enterprise interest. Not that AI is new – it’s been around for decades as a computer science discipline. What’s different now is that advances in technology have made it possible for companies ranging from search engine providers to camera and smartphone manufacturers to deliver AI-enabled products and services, many of which have become an integral part of many people’s daily lives. More than that, those same AI techniques and building blocks are increasingly available for enterprises to leverage in their own products and services without needing to bring on board AI experts, a breed that’s rare and expensive.

    Sentient systems capable of true cognition remain a dream for the future.  But AI today can help organizations transform everything from operations to the customer experience. The winners will be those who not only understand the true potential of AI but are also keenly aware of what’s needed to deploy a performant AI-based system that minimizes rather than creates risk and doesn’t result in unflattering headlines.

    These are the three key challenges all AI projects must tackle:

    • Underestimating the time and effort it takes to get an AI-powered system up and running. Even if the components are available out of the box, systems still need to be trained and fine-tuned. Depending on the exact use case and requirements for accuracy, it can take anything from a few hours to a couple of years to get a new system up and running. That’s assuming you have a well-curated data set available; if you don’t, that’s another challenge.
    • AI systems are only as good as the people that program them and the data they feed them. It's also people who decide to what degree to rely on the AI system and when to apply human expertise. Ignoring this principle will have unintended, likely negative consequences and could even make the difference between life and death. These are not idle warnings: We’ve already seen a number of well-publicized cases where training bias ended up discriminating against entire population groups, or image recognition software turned out to be racist; and yes, lives have already been put at risk by badly trained AI programs. Lastly, there’s the law of unintended consequences: people developing AI systems tend to focus on how they want the system to work, but not on how somebody with criminal or mischievous intent could subvert it.
    • Ignore legal, regulatory and ethical implications at your peril. For example, you're at risk of breaking the law if the models you run take into consideration factors that mustn't be used as the basis for certain decisions (e.g., race, sex). Or you could find yourself with a compliance breach if you’re under obligation to provide an exact audit trail of how a decision was arrived at, but neither the software nor its developers can explain how the result came about. A lot of grey areas surround the use of predictions when making decisions about individuals; these require executive-level discussions and decisions, as does the thorny issue of dual-use.
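    The training-bias warning in the second point can at least be screened for cheaply. This hypothetical sketch computes the positive-decision rate per demographic group, the quantity behind the "demographic parity" fairness check; the group labels and decisions are invented:

```python
def positive_rate_by_group(records):
    """Share of positive model decisions per group.

    `records` is a list of (group, decision) pairs, where decision is
    1 (e.g. loan approved) or 0 (rejected).
    """
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

# A model that approves group A twice as often as group B:
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = positive_rate_by_group(decisions)
print(rates)  # a large gap between groups is a red flag to investigate
```

    A gap in these rates is not proof of unlawful discrimination on its own, but it is exactly the kind of audit-trail evidence the third point says you may be obliged to produce.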

    Source: Forrester.com, January 9, 2017

  • Access to RPA bots in Azure thanks to Automation Anywhere

    Access to RPA bots in Azure thanks to Automation Anywhere

    Automation Anywhere has made it possible to access its Robotic Process Automation (RPA) bots from Azure. The company says it has set up an extensive partnership with Microsoft, which should enable joint product integration, joint selling and joint marketing.

    Automation Anywhere also chose Azure as its cloud provider, giving joint customers access to automation technology anytime and anywhere, idm writes. Organizations can host Automation Anywhere's RPA platform on Azure, on premises, or in a public or private cloud.

    Alysa Taylor, Corporate Vice President of Cloud and Business at Microsoft Business Applications and Industry, says that Automation Anywhere's vision matches Microsoft's: 'That is the vision of putting data and intelligence into all our products, applications, services and experiences'. According to Mihir Shukla, CEO and co-founder of Automation Anywhere, the partnership enables companies to become more efficient, to cut costs through automation, and to give employees the opportunity to focus on what they do best.

    Microsoft, in turn, will showcase Automation Anywhere's automation products in its Executive Briefing Centers worldwide, allowing customers to get hands-on demonstrations of Microsoft products powered by Automation Anywhere technology.


    In April, Automation Anywhere announced that it had also set up a partnership with Oracle Integration Cloud. The two companies want to accelerate intelligent automation and enable the adoption of AI-driven software bots in the Integration Cloud.

    Automation Anywhere's RPA platform should allow Oracle Integration Cloud customers to automate complex business processes, so that employees can focus on higher-value work. It should also increase organizational efficiency.

    As part of the partnership, the enterprise RPA platform connector for the Integration Cloud will become available. Oracle customers will also gain access to Automation Anywhere's software bots, and the two companies are collaborating on additional bot creations specifically for Oracle, which should become available in the Automation Anywhere Bot Store.

    Author: Eveline Meijer

    Source: Techzine

  • Top 10 big data predictions for 2019

    The amount of data created nowadays is incredible. Data keeps growing in volume and importance, and with that, the need to analyze it and identify patterns and trends becomes critical for businesses. The need for big data analytics is therefore higher than ever. That raises questions about the future of big data. ‘In which direction will the big data industry evolve?’ ‘What are the dominant trends for big data in the future?’ While there are several predictions doing the rounds, these are the top 10 big data predictions that will most likely dominate the (near) future of the big data industry:

    1. An increased demand for data scientists

    It is clear that with the growth of data, the demand for people capable of managing big data is also growing. Demand for data scientists, analysts and data management experts is on the rise. The gap between the demand for and availability of people skilled in analyzing big data trends is big and keeps getting bigger. It is up to you to decide whether to hire offshore data scientists/data managers or build an in-house team for your business.

    2. Businesses will prefer algorithms over software

    Businesses prefer purchasing existing algorithms over creating their own, as algorithms give them more customization options than buying software does. Software cannot be modified to fit user requirements; rather, businesses have to adjust to the software.

    3. Businesses increase investments in big data

    IDC analysts predict that the investment in big data and analytics will reach $187 billion in 2019. Even though the big data investment from one industry to the other will vary, spending as a whole will increase. It is predicted that the manufacturing industry will experience the highest investment in big data, followed by healthcare and the financial industry.

    4. Data security and privacy will be a growing concern

    Data security and privacy have been the biggest challenges in the big data and internet of things (IoT) industries. Since the volume of data started increasing exponentially, the privacy and security of data have become more complex, and the need to maintain high security standards is becoming extremely important. If anything will impede the growth of big data, it is data security and privacy concerns.

    5. Machine learning will be of more importance for big data

    Machine learning will be of paramount importance to big data, not least because it can be of huge help in predictive analysis and in addressing future challenges.

    6. The rise of predictive analytics

    Simply put, predictive analytics can predict the future more reliably with the help of big data analytics. It is a highly sophisticated and effective way to gather market and customer information and determine the next actions of both consumers and businesses. Analytics provide depth in the understanding of future behaviour.
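    At its simplest, predictive analytics is a fitted trend extrapolated forward. Here is a least-squares sketch in pure Python, with deliberately clean illustrative data:

```python
def linear_forecast(ys, steps_ahead=1):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1 and
    extrapolate `steps_ahead` past the last observation."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
    a = y_mean - b * t_mean
    return a + b * (n - 1 + steps_ahead)

monthly_sales = [100, 110, 120, 130]       # perfectly linear on purpose
print(linear_forecast(monthly_sales, 1))   # 140.0
```

    Real predictive analytics layers seasonality, many more features and machine-learned models on top, but "fit history, project forward" is the core move.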

    7. Chief Data Officers will have a more important role

    As big data becomes more important, so does the role of the Chief Data Officer. Chief Data Officers will be able to direct functional departments with the power of deeply analysed data and in-depth studies of trends.

    8. Artificial Intelligence will become more accessible

    Without going into detail about how artificial intelligence has become significantly important for every industry, it is safe to say that big data is a major enabler of AI. Processing large amounts of data to derive trends is what powers AI and machine learning, and cloud-based data storage infrastructure makes parallel processing of big data possible. Big data will make AI more productive and more efficient.

    9. A surge in IoT networks

    Smart devices are dominating our lives like never before. There will be an increase in the use of IoT by businesses and that will only increase the amount of data that is being generated. In fact, the focus will be on introducing new devices that are capable of collecting and processing data as quickly as possible.

    10. Chatbots will get smarter

    Needless to say, chatbots account for a large part of daily online interaction. They are becoming more and more intelligent and capable of personalized interactions. With the rise of AI, big data enables tons of conversation data to be processed and analysed, helping draw up a more streamlined, customer-focused strategy that makes chatbots smarter.
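    For contrast with the data-driven chatbots described here, the keyword-matching baseline they improve on fits in a few lines; the intents and responses below are made up:

```python
def reply(message, intents, fallback="Sorry, could you rephrase that?"):
    """Pick a canned response whose keywords appear in the message --
    the rule-based baseline that ML-driven chatbots improve on."""
    words = set(message.lower().split())
    for keywords, response in intents:
        if words & keywords:          # any keyword present in the message
            return response
    return fallback

intents = [
    ({"refund", "return"}, "I can help you start a return."),
    ({"shipping", "delivery"}, "Orders usually arrive in 3-5 days."),
]
print(reply("When is my delivery arriving?", intents))
```

    A smarter chatbot replaces the keyword sets with an intent classifier trained on logged conversations, which is exactly where the big data comes in.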

    Is your business ready for the future of big data analytics? Keep the above predictions in mind when preparing your business for emerging technologies and think about how big data can play a role.

    Source: Datafloq

  • United Nations CITO: Artificial intelligence will be humanity's final innovation

    The United Nations Chief Information Technology Officer spoke with TechRepublic about the future of cybersecurity, social media, and how to fix the internet and build global technology for social good.

    Artificial intelligence, said United Nations chief information technology officer Atefeh Riazi, might be the last innovation humans create.

    "The next innovations," said the cabinet-level diplomat during a recent interview at her office at UN headquarters in New York, "will come through artificial intelligence."

    From then on, said Riazi, "it will be the AI innovating. We need to think about our role as technologists and we need to think about the ramifications—positive and negative—and we need to transform ourselves as innovators."

    Appointed by Secretary General Ban Ki-moon as CITO and Assistant Secretary-General of the Office of Information and Communications Technology in 2013, Riazi is also an innovator in her own right in the global security community.

    Riazi was born in Iran, and is a veteran of the information technology industry. She has a degree in electrical engineering from Stony Brook University in New York, spent over 20 years working in IT roles in the public and private sectors, and was the New York City Housing Authority's Chief Information Officer from 2009 to 2013. She has also served as the executive director of CIOs Without Borders, a non-profit organization dedicated to using technology for the good of society—especially to support healthcare projects in the developing world.

    Riazi and her UN staff meet with diplomats and world leaders, NGOs, and executives at private companies like Google and Facebook to craft technology policy that impacts governments and businesses around the world.

    TechRepublic's in-depth interview with her covered a broad range of important technology policy issues, including the digital divide, e-waste, cybersecurity, social media, and, of course, artificial intelligence.

    The Digital Divide

    TechRepublic: Access to information is essential in modern life. Can you explain how running IT for the New York City Housing Authority helps low income people?

    UN CITO: When I was at New York City Housing, I came in as a CIO. The chairman had been a CIO and within six months most of the leadership left. He looked at me. I looked at him. The board looked at me. I knew to be nervous, and they said, "you're in. You're the next acting general manager of New York City Housing." I said, "Okay."

    New York City Housing is a $3 billion organization providing support to about 500,000 residents. You have the Section 8 program, you have the public housing, and a billion and a half of construction. I came out of IT and I had to help manage and run New York City Housing at a very difficult time.

    When you look at the city of New York, the digital divide among the youth and among the poor is very high. We have a digital divide right in this great city. Today I have two eight-year-olds, and a lot of [their] homework research is done online. But in other areas of the city, you have kids who don't have access to computers, don't have access to the internet, cannot afford it. They can't find jobs because they don't have access to the internet. They can't do as well in school. A lot of them are from single-parent families, maybe with grandparents raising them.

    How do we provide them that access? How do we close the gap so they can compete with other classmates who have access to knowledge and information?

    In Finland, they passed a law stating that internet access is a birthright. If it's a birthright, then let's give it to people right here in New York and elsewhere in the world.

    All of the simple things that we have and we offer our children, if we could [provide internet access] as a public service, we begin to close the income gap, help people learn skills, and make them more viable for jobs.


    TechRepublic: Can you help us understand the role of electronic waste (e-waste) on women and girls in developing countries?

    UN CITO: E-waste is the mercury and lead. Mercury and lead contribute to 5% of global waste, and they contribute to 70% of hazardous materials. You have computers, servers, storage, and cell phones, and we have no plan for recycling them. This is polluting the air and the water in China and India. If you burn electronics you get dioxin, which is like Agent Orange. The question to the tech sector is: okay, you created this wonderful world of technology, but you have no plan for addressing these big issues of environmental hazard.

    The impact of electronic waste is tremendous because a woman's body treats mercury like calcium. It takes it in and stores it in the bones, and then when you're pregnant, guess what? The body thinks, "Oh, I got some calcium. Here it is."

    Newborns have mercury and lead in their blood, and disease. It's just contributing to so many children, so many women getting sick and because women pass it on to the next generation, [children] are impacted.

    Where is the responsibility of the tech sector to say, "I will protect the women. I will protect the children. I will take out the lead and mercury. I will help contribute to recycling of my materials."

    The Deep Web

    TechRepublic: While there are many privacy benefits to the Deep Web, it's no secret that criminal activity flourishes on underground sites. I know this is the perpetual question, but is this criminal behavior that has always existed and now we can see it a little better, or does the Deep Web perpetuate and increase criminal behavior?

    UN CITO: I wish I had enough insight to answer correctly, but I can give it from my perspective. The scope has changed tremendously. If you look at slavery and the number of people trafficked, there's 200 million people trafficked now. You look at the numbers and you look at how much the slaves were sold [in the past]. I think the slaves were sold for [hundreds] of... today's dollars. Today, you can buy a girl for $300 through the Deep Web.

    Here's the thing. Child trafficking and human trafficking have exploded because we're a global world. We can sell and buy globally. Before, the criminals couldn't operate globally. They couldn't move people as fast.

    TechRepublic: If we're putting this in very cynical market terms, the market for humans has grown due to the Deep Web?

    UN CITO: Yes. The market has grown for sex trafficking, or for organs, or for just basic labor. There are many reasons where this has happened. We're seeing tremendous growth in criminal activity. It's very difficult to find criminals. Drug trafficking is easier. Commerce is easier in the Deep Web. All of that is going up.

    Ninety-nine percent of humans are good, but you've got the 1%, and I think we need a plan for reacting to the criminal activities. At the UN we are beginning to build the cyber-expertise to become a catalyst. Not to resolve these issues, because I look at the internet as an infant that we have created, this species we've created which is growing and evolving. It's going through its "terrible twos" right now. We have a choice to try to manage it, censor it, or shut it down, which we see in some countries. Or we have a choice to build its antibodies. Make sure that it becomes strong.

    We [can] create the "Light Web," and I think we can only do it through the use of all the amazing technology people globally want to [use to] do good. As a social group, we can create positive algorithms for social good.

    Encryption and cybersecurity

    TechRepublic: In the digital world, the notion of sovereignty is shifting. What is the UN's role in terms of cybersecurity?

    UN CITO: It's shifting, exactly, because government rule over a civil society in a cyber-world doesn't exist. Do you think that criminals care that the UN or governments have a policy, or a rule? Countries and criminals will begin to attack each other.

    From our perspective, our mission is really peace and security, development, and human rights. The UN has a number of responsibilities. We have peacekeeping, human rights, development, and sustainable development. We look at cybersecurity, and we say that peace in the cyber-world is very different because countries are starting to attack each other, and starting to attack each [other's] industrial systems. Often attacks are asymmetrical. Peace to me is very different than peace to you.

    We talk about cybersecurity. Okay, then what do we do? This is the world we've created through the internet. What do we do to bring peace to this world? What does anyone do?

    I think that we spend a lot of money on cybersecurity globally. Public and private money, and we are not successful, really. Intrusions happen every day. Intellectual property is lost. Privacy, the way we knew it, has changed completely. There's a new way of thinking about privacy, and what's confidential.

    We worry about industrial systems like our electric grid. We worry about our member states' industrial systems, intrusions into electricity, into water, and sanitation—things that impact human life.

    Our peacekeepers are out in the field. We have helicopters. We have planes. A big worry of ours is an intrusion into a plane or helicopter, where you think the fuel gauge is full but it's empty. Or an intrusion through GPS: if your GPS is impacted, you think you're here but you're actually there.

    Where is the role of encryption? Encryption is amoral. It could be used for good. It could be used for bad. It's hard to have an opinion on encryption, for me at least, without realizing that the same thing I endorse for everyone, others endorse for criminals. Do we have the sophistication, the capabilities to limit that technology only for the good? I don't think we do.

    TechRepublic: What is the plan for cybersecurity?

    UN CITO: Well, I've been waiting. I think that is something for all the member states to come together and talk about cybersecurity.

    But what is the plan for us as Homo sapiens, now that we are connected sapiens and very soon will be a combination of carbon and silicon? As super intelligent beings, what is the plan? This is not being talked about. We hope that through the creation of the Digital Blue Helmets we'd begin a conversation and begin to ask people to contribute positively to what we believe is ethically right. But then again, what we believe is ethically right somebody else may believe is ethically wrong.

    Social Media

    TechRepublic: The UN recently held a conference on social media and terrorism, particularly related to Daesh [ISIS]. What was the discussion about? What takeaways came from that conference?

    UN CITO: Well, we brought together a lot of information and communication professionals and academics to talk about the big issue of social media and terrorism with Daesh and ISIL. I think this type of dialog is really critical, because if we don't talk about these issues, we can't come up with policy recommendations. I think there was a lot of really good discussion about human rights on the internet. "Thou shalt do no harm."

    But we know that whatever policies we come up with, Daesh would be the last group that cares whether you have policies or not. There was deeper discussion about how youth get attracted to radicalism. You have 50% youth unemployment. You have major income disparity. I think if we can't begin to address the basic social issues, we're going to have more and more youth attracted to this radicalism. There was good discussion and dialog that we need to address those issues.

    There was some discussion about how we create the positive message. People, especially youth, want to do something positive. They want to participate. They want to be part of a bigger thing. How do we encourage them? When they look at the negative message, how do you bring in a positive message? Can governments do something about that?

    Look at the private sector. When there was a Tylenol scare, or Toyota cars accelerating on their own, and you went online and searched for Tylenol, you didn't get all the bad stories about Tylenol. You went to the sites that Tylenol wanted you to go to. Search is so powerful, and if you can begin to write positive algorithms, that begins to move the youth to positive messaging.

    Don't try to use marketing or gimmicks because it's so transparent. People see right through it. Governments have a responsibility to provide a positive information space for their youth. There was a lot of good dialog around that.

    On the technology side, I think the internet is a two-year-old infant; it is amoral, and we can use it for good or for bad. You can't shut down the internet. You can't shut down social media. There's a very gray space because, as I said, somebody's freedom fighter is somebody else's terrorist. Is it for Facebook or Twitter to make that decision?

    Artificial intelligence

    TechRepublic: I know you are quite curious about artificial intelligence. Is there a UN policy with respect to AI?

    UN CITO: AI is an amazing thing to talk about, because now you can look at patterns much faster than humans [can]. Do we as technologists have the sophistication of addressing the moral and ethical issues of what's good and bad?

    I think this is what scares me when it comes to AI. Let's say we as humans say, "we want people to be happy and with artificial intelligence, we should build systems for people to be happy." What does that mean?

    I'm looking at machine learning, and the path we're creating for 10, 20, 30 years from now, but not fully understanding the ethical programming that we're putting into the systems. IT people are creating the next world. The ethical programming they do is what is in their heads, and so policies are being written in lines of code, in the algorithms.

    We look at artificial intelligence and machine learning, and the world we see as technologists 20 years from now is very different than the world we have today. Artificial intelligence is this super, super intelligent species that is not human. Humans have reached our limits.

    That idea poses so many questions. If we create this artificial intelligence that can do 80% of the labor that humans do, what are the changes? Social, cultural, economic. All of these big, big questions have to be talked about.

    I'm hoping that's the United Nations, but there's so much political opposition to those conversations. So much political opposition because we are holding on to our physical borders, and we have forgotten that those physical borders are gone. The world is virtual. We sit here as heads of departments and ministers and talk about AI. We discuss the moral, the ethical issues that people are going to confront with AI technology—positive and negative.

    Source: TechRepublic

  • Using the right workforce options to develop AI with the help of data

    Using the right workforce options to develop AI with the help of data

    While it may seem like artificial intelligence (AI) has hit the jackpot, a lot of work needs to be done before its potential can really come to life. In our modern take on the 20th century space race, AI developers are hard at work on the next big breakthrough that will solve a problem and establish their expertise in the market. It takes a lot of hard work for innovators to deliver on their vision for AI, and it’s the data that serves as the lifeblood for advancement.  

    One of the biggest challenges AI developers face today is processing all the data that feeds into machine learning systems, a process that requires a reliable workforce with relevant domain expertise and high standards for quality. To address these obstacles and get ahead, many innovators are taking a page from the enterprise playbook: alternative workforce models can provide a competitive edge in a crowded market.

    Alternative workforce options

    Deloitte’s 2018 Global Human Capital Trends study found that only 42% of organizations surveyed said their workforce is made up of traditional salaried employees. Employers expect their dependence on contract, freelance and gig workers to dramatically increase over the next few years. Accelerating this trend is the pressure business leaders face to improve their workforce ecosystem, as alternative workforce options bring companies the possibility to advance services, move faster and leverage new skills.

    While AI developers might be tempted to tap into new workforce solutions, identifying the right approach for their unique needs demands careful consideration. Here’s an overview of common workforce options and considerations for companies to select the right strategy for cleaning and structuring the messy, raw data that holds the potential to add rocket fuel to your AI efforts:

    • In-house employees: The first line of defense for most companies, internal teams can typically manage data needs with reasonably good quality. However, these processes often grow more difficult and costlier to manage as things progress, calling for a change of plans when it’s time to scale. That’s when companies are likely to turn to alternative workforce options to help structure data for AI development.
    • Contractors and freelancers: This is a common alternative to in-house teams, but business leaders will want to factor in extra time it will take to source and manage their freelance team. One-third of Deloitte’s survey respondents said their human resources (HR) departments are not involved in sourcing (39%) or hiring (35%) decisions for contract employees, which 'suggests that these workers are not subject to the cultural, skills, and other forms of assessments used for full-time employees'. That can be a problem when it comes to ensuring quality work, so companies should allocate additional time for sourcing, training and management.
    • Crowdsourcing: Crowdsourcing leverages the cloud to send data tasks to a large number of people at once. Quality is established using consensus, which means several people complete the same task. The answer provided by the majority of the workers is chosen as correct. Crowd workers are paid based on the number of tasks they complete on the platform provided by the workforce vendor, so it can take more time to process data outputs than it would with an in-house team. This can make crowdsourcing a less viable option for companies that are looking to scale quickly, particularly if their work requires a high level of quality, as with data that provides the intelligence for a self-driving car, for example.
    • Managed cloud workers: A solution that has emerged over the last decade, combining the quality of a trained, in-house team with the scalability of the crowd. It’s ideally suited for data work because dedicated teams develop expertise in a company’s business rules by sticking with projects for a longer period of time. That means they can build context and domain knowledge while providing consistently high data quality. However, teams need to be managed in ways that optimize productivity and engagement, and that takes deliberate effort. Companies should look for partners with tested procedures for communication and process.
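    The consensus mechanism described for crowdsourcing can be sketched as a simple majority vote over worker answers. This is a hypothetical illustration (function names and labels are invented here; real platforms typically also weight workers by historical accuracy):

```python
from collections import Counter

def consensus(labels):
    """Return the label given by the majority of crowd workers.

    The answer provided by most workers is taken as correct, per the
    consensus approach described above.
    """
    winner, _count = Counter(labels).most_common(1)[0]
    return winner

# Three crowd workers label the same image; two agree, so their
# answer is accepted as the ground truth.
result = consensus(["cat", "cat", "dog"])
```

    Sending every task to several workers is what makes crowdsourcing slower and costlier per label than a trained in-house team, which is the trade-off noted above.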

    Getting down to business

    From founders and data scientists to product owners and engineers, AI developers are fighting an uphill battle. They need all the support they can get, and that includes a dedicated team to process the data that serves as the lifeblood of AI and machine learning systems. When you combine the training and management challenges that AI developers face, workforce choices might just be the factor that determines success. With the right workforce strategy, companies will have the flexibility to respond to changes in market conditions, product development and business requirements.

    As with the space race, the pursuit of AI in the real world holds untold promise, but victory won’t come easy. Progress is hard-won, and innovators who identify strong workforce partners will have the tools and talent they need to test their models, fail faster and ultimately get it right quicker. Companies that make this process a priority now can ensure they’re in the best position to break away from the competition as the AI race continues.

    Author: Mark Sears

    Source: Dataconomy

  • What about the relation between AI and machine learning?

    Artificial intelligence (AI) is one of the most compelling areas of computer science research. AI technologies have gone through periods of innovation and growth, but never has AI research and development seemed as promising as it does now. This is due in part to amazing developments within machine learning, deep learning, and neural networks.

    Machine learning, a cutting-edge branch of artificial intelligence, is propelling the AI field further than ever before. While AI assistants like Siri, Cortana, and Bixby are useful, if not amusing, applications of AI, they lack the ability to learn, self-correct, and self-improve. 

    They are unable to operate outside of their code, learn independently, and apply past experiences to new problems. Machine learning is changing that. Machines are able to grow outside their original code which allows them to mimic the cognitive processes of the human mind.

    Why is machine learning important for AI? As you have most likely already gathered, machine learning is the branch of AI dedicated to endowing machines with the ability to learn. While there are programs that help sort your email, provide you with personalized recommendations based on your online shopping behavior, and make playlists based on music you like, these programs lack the ability to truly think for themselves. 

    While these “weak AI” programs are able to analyze data well and conjure up impressive responses, they are a far cry from true artificial intelligence. The only way to arrive at anything close to true artificial intelligence is for a machine to learn. A machine with true artificial intelligence, also known as artificial general intelligence, would be aware of its environment and would manipulate that environment to achieve its goals. A machine with artificial general intelligence would be no different from a human, who is aware of his or her surroundings and uses that awareness to arrive at solutions to problems occurring within those surroundings.

    You may be familiar with the famous AlphaGo program that beat a professional Go player in 2016, to the chagrin of many professional Go players. While AI had beaten chess players in the past, the AlphaGo win came as an incredible shock to Go players and AI researchers alike. Surpassing Go players was previously thought to be impossible, given that each move in the ancient game has an almost infinite number of permutations. Decisions in Go are so intricate and complex that it was thought the game required human intuition. As it turns out, Go does not require human intuition; it only requires general-purpose learning algorithms.

    How were these general-purpose learning algorithms crafted? The AlphaGo program was created by DeepMind Technologies, an AI company acquired by Google in 2014, whose researchers and C++, Lua, and Python developers built a neural network along with a model that allows machines to mimic short-term memory. The neural network and the short-term memory model are applications of deep learning, a cutting-edge branch of machine learning.

    Deep learning is an approach to machine learning in which software emulates the human brain. Currently, machine learning applications allow a machine to train on a certain task by analyzing examples of that task. Deep learning allows machines to learn in a more general way. So, instead of simply mimicking cognitive functioning in a predefined task, machines are endowed with what can be thought of as a sort of artificial brain. This artificial brain is called an artificial neural network, or neural net for short.

    There are several neural net models in use today, and all use mathematics to copy the structure of the human brain. Neural nets are divided into layers and consist of thousands, sometimes millions, of interconnected processing nodes. Each connection between nodes is given a weight. If the weighted input exceeds a predefined threshold, the node’s data is sent on to the next layer. These nodes act as artificial neurons, sharing clusters of data, storing experience and knowledge based on that data, and firing off new bits of information. The nodes interact dynamically, changing their thresholds and weights as they learn from experience.
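    The node-and-threshold model described above can be sketched in a few lines. This is a toy illustration with hypothetical weights and inputs, not a production network (real nets learn millions of weights by training rather than hard-coding them):

```python
def node_output(inputs, weights, threshold):
    """One artificial neuron: sum the weighted inputs and fire only
    if the activation exceeds the threshold, as described above."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation if activation > threshold else 0.0

def layer_output(inputs, weight_rows, threshold):
    """One layer: each row of weights defines the connections into
    one node; every node sees the same inputs from the layer below."""
    return [node_output(inputs, row, threshold) for row in weight_rows]

# Two inputs feed a two-node hidden layer, which feeds one output node.
hidden = layer_output([1.0, 0.5], [[0.9, 0.3], [0.2, 0.1]], threshold=0.5)
output = layer_output(hidden, [[0.7, 0.7]], threshold=0.5)
```

    Here the first hidden node fires (its weighted sum exceeds 0.5) while the second stays silent, so only one signal propagates to the output layer; learning would consist of nudging those weights based on experience.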

    Machine learning and deep learning are exciting and alarming areas of research within AI. Endowing machines with the ability to learn certain tasks could be extremely useful, could increase productivity, and could help expedite all sorts of activities, from search algorithms to data mining. Deep learning provides even more opportunities for AI’s growth. As researchers delve deeper into deep learning, we could see machines that understand the mechanics behind learning itself, rather than simply mimicking intellectual tasks.

    Author: Greg Robinson

    Source: Information Management

  • What is the impact of AI on cybersecurity?

    What is the impact of AI on cybersecurity?

    In today's technology-driven world we are becoming increasingly dependent on various technological tools that help us finish everyday tasks much faster or even do them for us, artificial intelligence being the most advanced one. While some welcome it with open arms, others are more wary, urging increased protection.

    We cannot deny how much AI has infiltrated our lives. We are surrounded by it every day, though many don't even realize it. One of its simplest forms is the virtual assistant (VA), used by 72% of consumers in the USA. AI is advancing at breakneck speed, prompting serious ethical discussions.

    Not long ago, some of the world's most brilliant minds, like Stephen Hawking and Elon Musk, warned about the possible ramifications if the development of artificial intelligence isn't controlled. Hawking even stated that AI could be the worst event in the history of our civilization. But whether we like it or not, the dominance of autonomous technology is inevitable.

    Security in the first place

    When it comes to cybersecurity, companies are spending huge amounts of money on maximizing its efficiency in the face of continually growing rates of cybercrime (up 11% since last year). That's not surprising, since the average cost of cybercrime has increased to $13 million, with an average of 145 security breaches in 2019, and counting.

    Companies should worry not only about losing money and their own sensitive data, but about losing their customers as well. An IBM poll showed that 78% of respondents think a company's ability to safeguard their private data is 'extremely' important, while 75% would not buy any of its products, no matter how great they are, if they don't believe the company can protect their data.

    Due to a huge shortage of qualified cybersecurity professionals, with almost 3 million open positions, companies are increasingly implementing AI in their cybersecurity protection systems. The AI cybersecurity market is expected to reach a staggering $35 billion by 2024, as businesses recognize the need for advanced technology that keeps pace with fast-evolving cybercrime.

    But how safe is AI?

    AI can contribute to an increased level of cyber protection: its learning algorithms help cybersecurity experts reduce their workload and save time, and they adapt to and detect new threats much faster (today it takes more than half a year on average to detect a data breach). But there is also the other side of the coin to consider.
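    The faster threat detection described above often starts from simple statistical pattern-spotting over logs and traffic. The sketch below is a minimal, hypothetical illustration of that idea using a z-score over request rates (invented numbers and function names; production systems use far richer learned models):

```python
from statistics import mean, stdev

def flag_anomalies(values, z_threshold=2.0):
    """Flag values that deviate from the mean by more than
    z_threshold standard deviations -- a crude stand-in for the
    pattern detection that AI-based security tooling automates."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > z_threshold * sigma]

# Requests per minute from a hypothetical server log; the sudden
# spike is the kind of outlier an automated system can surface in
# seconds rather than months.
traffic = [120, 118, 125, 119, 122, 121, 117, 900]
anomalies = flag_anomalies(traffic)
```

    A learning system generalizes this idea: instead of one hand-set threshold, it continually re-estimates what "normal" looks like across many signals at once.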

    Just as cybercriminals can manipulate people to obtain sensitive information, they can do the same with artificial intelligence, taking spear-phishing to a whole new level. This represents a serious concern, with a vast majority (91%) of US and Japan professionals expecting that companies' AI will be used against them. The same applies to VAs, which record and store everything we say (personal information, business-related information, passwords, financial information…) and which can be compromised by hackers.

    AI can make detecting new vulnerabilities much easier, but an AI's ability to make independent decisions can also be compromised, and such tampering can stay undetected for a while. This represents a huge opportunity for cybercriminals to launch massive attacks in disguise, especially if they use their own AIs to make these attacks more sophisticated or to build new types of malware. Another concern is that with an AI cybersecurity protection system in place, employees might fall into a false sense of security, thus becoming less cautious.


    With AI inevitably becoming an integral part of business protection systems worldwide, it is important to consider all of its aspects when introducing it, both good and bad. 

    With companies investing huge resources in their perfection, cybersecurity experts should simultaneously focus on minimizing any possibilities of AI being exploited by cybercriminals.

    Source: Datafloq

  • Where Artificial Intelligence Is Now and What’s Just Around the Corner

    artificial-intelligence-predictions-2-234x156Unexpected convergent consequences...this is what happens when eight different exponential technologies all explode onto the scene at once.

    This post (the second of seven) is a look at artificial intelligence. Future posts will look at other tech areas.

    An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try to predict the future when A.I., robotics, VR, synthetic biology and computation are all doubling, morphing and recombining. You have a very exciting (read: unpredictable) future. This year at my Abundance 360 Summit I decided to explore this concept in sessions I called "Convergence Catalyzers."

    For each technology, I brought in an industry expert to identify their Top 5 Recent Breakthroughs (2012-2015) and their Top 5 Anticipated Breakthroughs (2016-2018). Then, we explored the patterns that emerged.

    Artificial Intelligence — Context

    At A360 this year, my expert on AI was Stephen Gold, the CMO and VP of Business Development and Partner Programs at IBM Watson. Here's some context before we dive in.

    Artificial intelligence is the ability of a computer to understand what you're asking and then infer the best possible answer from all the available evidence.

    You may think of AI as Siri or Google Now on your iPhone, Jarvis from Iron Man or IBM's Watson.

    Progress of late is furious — an AI R&D arms race is underway among the world's top technology giants.

    Soon AI will become the most important human collaboration tool ever created, amplifying our abilities and providing a simple user interface to all exponential technologies. Ultimately, it's helping us speed toward a world of abundance.

    The implications of true AI are staggering, and I asked Stephen to share his top five breakthroughs from recent years to illustrate some of them.

    Recent Top 5 Breakthroughs in AI: 2011 - 2015

    "It's amazing," said Gold. "For 50 years, we've ideated about this idea of artificial intelligence. But it's only been in the last few years that we've seen a fundamental transformation in this technology."

    Here are the breakthroughs Stephen identified in artificial intelligence research from 2011-2015:

    1. IBM Watson's Jeopardy win demonstrates the integration of natural language processing, machine learning (ML), and big data.

    In 2011, IBM's AI system, dubbed "Watson," won a game of Jeopardy against the top two all-time champions.

    This was a historic moment, the "Kitty Hawk moment" for artificial intelligence.

    "It was really the first substantial, commercial demonstration of the power of this technology," explained Gold. "We wanted to prove a point that you could bring together some very unique technologies: natural language technologies, artificial intelligence, the context, the machine learning and deep learning, analytics and data and do something purposeful that ideally could be commercialized."

    2. Siri/Google Now redefine human-data interaction.

    In the past few years, systems like Siri and Google Now opened our minds to the idea that we don't have to be tethered to a laptop to have seamless interaction with information.

    In this model, AIs will move from speech recognition to natural language interaction, to natural language generation, and eventually to an ability to write as well as receive information.

    3. Deep learning demonstrates how machines learn on their own, advance and adapt.

    "Machine learning is about man assisting computers. Deep learning is about systems beginning to progress and learn on their own," says Gold. "Historically, systems have always been trained. They've been programmed. And, over time, the programming languages changed. We certainly moved beyond FORTRAN and BASIC, but we've always been limited to this idea of conventional rules and logic and structured data."

    As we move into the area of AI and cognitive computing, we're exploring the ability of computers to do more unaided/unassisted learning.

    4. Image recognition and interpretation now rival what humans can do — allowing for image interpretation and anomaly detection.

    Image recognition has exploded over the last few years. Facebook and Google Photos, for example, each have tens of billions of images on their platforms. With this dataset, they (and many others) are developing technologies that go beyond facial recognition, providing algorithms that can tell you what is in the image: a boat, plane, car, cat, dog, and so on.

    The crazy part is that the algorithms are better than humans at recognizing images. The implications are enormous. "Imagine," says Gold, "an AI able to examine an X-ray or CAT scan or MRI to report what looks abnormal."

    5. AI Apps proliferate: universities scramble to adopt AI curriculum

    As AI begins to impact every industry and every profession, there is a response where schools and universities are ramping up their AI and machine learning curriculum. IBM, for example, is working with over 150 partners to present both business and technology-oriented students with cognitive computing curricula.

    So what's in store for the near future?

    Anticipated Top AI Breakthroughs: 2016 – 2018

    Here are Gold's predictions for the most exciting, disruptive developments coming in AI in the next three years. As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.

    1. Next-gen A.I. systems will beat the Turing Test

    Alan Turing created the Turing Test over half a century ago as a way to determine a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.

    Loosely, if an artificial system passed the Turing Test, it could be considered "AI."

    Gold believes that "for all practical purposes, these systems will pass the Turing Test" in the next three-year period.

    Perhaps more importantly, when that happens, it will accelerate the conversation about the proper use of these technologies and their applications.

    2. All five human senses (yes, including taste, smell and touch) will become part of the normal computing experience.

    AIs will begin to perceive through all five human senses. "The sense of touch, smell, and hearing will become prominent in the use of AI," explained Gold. "It will begin to process all that additional incremental information."

    When applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses.

    3. Solving big problems: detect and deter terrorism, manage global climate change.

    AI will help solve some of society's most daunting challenges.

    Gold continues, "We've discussed AI's impact on healthcare. We're already seeing this technology being deployed in governments to assist in the understanding and preemptive discovery of terrorist activity."

    We'll see revolutions in how we manage climate change, redesign and democratize education, make scientific discoveries, leverage energy resources, and develop solutions to difficult problems.

    4. Leverage ALL health data (genomic, phenotypic, social) to redefine the practice of medicine.

    "I think AI's effect on healthcare will be far more pervasive and far quicker than anyone anticipates," says Gold. "Even today, AI/machine learning is being used in oncology to identify optimal treatment patterns."

    But it goes far beyond this. AI is being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences.

    5. AI will be woven into the very fabric of our lives — physically and virtually.

    Ultimately, during the AI revolution taking place in the next three years, AIs will be integrated into everything around us, combining sensors and networks and making all systems "smart."

    AIs will push forward the ideas of transparency, of seamless interaction with devices and information, making everything personalized and easy to use. We'll be able to harness that sensor data and put it into an actionable form, at the moment when we need to make a decision.

    Source: SingularityHub

  • Why we should be aware of AI bias in lending

    It seems that, beyond all the hype, AI (artificial intelligence) applications in lending really do speed up and automate decision-making.

    Indeed, a couple of months ago Upstart, an AI-leveraging fintech startup, announced that it had raised a total of $160 million since inception. It also inked deals with the First National Bank of Omaha and the First Federal Bank of Kansas City.

    Upstart won recognition for its innovative approach to lending. The platform identifies who should get a loan, and of what amount, using AI trained on so-called 'alternative data'. Such alternative data can include information on an applicant's purchases, type of phone, favorite games, and social media friends' average credit score.

    However, the use of alternative data in lending is still far from making the process faster, fairer, and wholly GDPR-compliant. Besides, it's not an absolute novelty.

    Early credit agencies hired specialists to dig into local gossip on their customers, while back in 1935 neighborhoods in the U.S. got classified according to their collective creditworthiness. In a more recent case from 2002, a Canadian Tire executive analyzed last year’s transactional data to discover that customers buying roof cleaning tools were more financially reliable than those purchasing cheap motor oil.

    There is one significant difference between the past and the present, however. Earlier, it was a human who collected and processed both alternative and traditional data, including debt-to-income, loan-to-value, and individual credit history. Now the algorithm is stepping forward, as many believe it to be both more objective and faster.

    What gives cause for concern, though, is that AI can turn out to be no less biased than humans. Heads up: if we don't control how the algorithm self-learns, AI can become even more one-sided.

    Where AI bias creeps in

    Generally, AI bias doesn't happen by accident. The people who train the algorithm make it subjective. Influenced by personal, cultural, educational, and location-specific factors, even the best algorithm trainers might use inherently prejudiced input data.

    If not detected in time, this bias results in skewed decisions that only get worse over time, because the algorithm bases each new decision on its previous ones. Evolving on its own, it ends up far more complex than at the start of its operation (a classic snowball effect). In plain words, it continuously teaches itself, whether the learning material is correct or not.
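
    The snowball effect can be illustrated with a toy simulation (a hypothetical sketch with made-up numbers, not any lender's actual model). Two groups repay at exactly the same true rate, but one starts with a prejudiced risk estimate; because that group is never approved, the lender never collects the data that would correct the estimate, and the bias persists round after round.

```python
import random
random.seed(0)

TRUE_REPAY = 0.9                 # identical true repayment rate for both groups
PRIOR = {"A": 0.9, "B": 0.6}     # prejudiced starting estimate for group B

def run(rounds=10, applicants=200, approve_if=0.75, prior_weight=50):
    """Each round, the lender approves a group only if its *estimated*
    repayment rate clears the bar, then re-estimates from observed
    outcomes. A group that is never approved produces no data, so its
    prejudiced prior is never corrected."""
    est = dict(PRIOR)
    for _ in range(rounds):
        for g in ("A", "B"):
            n_approved = applicants if est[g] >= approve_if else 0
            repaid = sum(random.random() < TRUE_REPAY for _ in range(n_approved))
            # Blend the prior estimate with the newly observed outcomes
            est[g] = (prior_weight * est[g] + repaid) / (prior_weight + n_approved)
    return est

print(run())  # group A stays near 0.9; group B is stuck at 0.6 with zero new data
```

    The point of the sketch is that the feedback loop, not any single decision, does the damage: the model's own denials starve it of the evidence it would need to unlearn its prejudice.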

    Now, let’s look at how exactly AI might discriminate in the lending decisions it makes. Looking at the examples below, you'll easily follow the key idea: AI bias often goes back to human prejudice.

    AI can discriminate based on gender

    While there are traditionally more men in senior and higher-paid positions, women continue to face the so-called 'glass ceiling' and pay-gap problems. As a result, even though women on average tend to be better savers and payers, female entrepreneurs continue to receive fewer and smaller business loans than men.

    The use of AI may only worsen this tendency: sexist input data can lead to a spate of loan denials for women. Relying on misrepresentative statistics, AI algorithms may favor a male applicant over a female one even when all other parameters are broadly similar.

    AI can discriminate based on race

    This sounds harsh, but black applicants are twice as likely to be refused a mortgage as white ones. If the input data used to train the algorithm reflects such racial disparity, the algorithm can internalize it quickly and start producing more and more denials.

    Alternative data can also become a source of 'AI racism'. Consider an algorithm using seemingly neutral information on an applicant's prior fines and arrests. The truth is, such information is not neutral. According to The Washington Post, African-Americans become policing targets much more frequently than the white population, and in many cases baselessly.

    The same goes for other types of data. Racial minorities face inequality in occupation and in the neighborhoods they live in. Any of these metrics can become grounds for AI to say 'no' to a non-white applicant.

    AI can discriminate based on age

    The longer a credit history, the more we know about a person's creditworthiness. Older people typically have longer credit histories, simply because more financial transactions lie behind them.

    The younger generation, by contrast, has less data about its transactions, which can become an unfair reason for a credit denial.

    AI can discriminate based on education

    Consider an AI lending algorithm that analyzes an applicant’s grammar and spelling while making credit decisions. An algorithm might ‘learn’ that bad spelling habits or constant typos point to poor education and, consequently, bad creditworthiness.

    In the long run, the algorithm can start disqualifying individuals with writing difficulties or disorders, even when those have nothing to do with their ability to pay bills.

    Tackling prejudice in lending

    Overall, to make AI-run loan processes free of bias, it is crucial to cleanse the input data of any possible human prejudice, from misogyny and racism to ageism.

    To make training data more neutral, organizations should form more diverse AI development teams of both lenders and data scientists, where the former can inform engineers about the specifics of their job. What's more, such financial organizations should train everyone involved in making decisions with AI to adhere to and enforce fair, non-discriminatory practices in their work. Otherwise, without measures to ensure diversity and inclusivity, lending businesses risk generating AI algorithms that severely violate anti-discrimination and fair-lending laws.

    Another step toward fairer AI is to ensure that no lending decision is made solely by the algorithm: a human supervisor should assess each decision before it has a real-life impact. Article 22 of the GDPR supports this, stating that people should not be subject to purely automated decision-making, particularly where it produces legal effects.
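
    A human-in-the-loop gate of the kind described above might be sketched as follows (a hypothetical illustration; the function names, threshold, and log structure are my own assumptions, not any regulator's or lender's actual API). The model only recommends; a human reviewer makes the binding decision, and both steps are logged for audit:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewLog:
    entries: List[dict] = field(default_factory=list)

def decide(applicant_id: str, score: float,
           reviewer: Callable[[str, bool, float], bool],
           log: ReviewLog, threshold: float = 0.7) -> bool:
    """The model only *recommends* (score >= threshold); the human
    reviewer has the final say, and both steps are recorded."""
    recommendation = score >= threshold
    approved = reviewer(applicant_id, recommendation, score)
    log.entries.append({
        "applicant": applicant_id,
        "model_score": score,
        "model_recommends": recommendation,
        "approved": approved,
    })
    return approved

# Usage: a reviewer who, after seeing context, overrides a model denial
log = ReviewLog()
approved = decide("app-001", 0.65, reviewer=lambda aid, rec, s: True, log=log)
print(approved, log.entries[0]["model_recommends"])  # True False
```

    The design choice matters: because the return value always comes from the reviewer, no decision with legal effect can leave the system on the model's say-so alone, which is the behavior this reading of Article 22 calls for.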

    The truth is, this is easier said than done. But if left unaddressed, unintentional AI bias can put lending businesses in a tough spot no less than any intentional act of bias, and only through the collective effort of data scientists and lending professionals can we avert these risks.

    Author: Yaroslav Kuflinski

    Source: Information-management

  • Who will dominate: man or machine?

    Developments in information technology are moving fast, and perhaps ever faster. We hear and see more and more about business intelligence, self-service BI, artificial intelligence and machine learning. We see it in employees who increasingly have management information at their fingertips through tools, in self-driving cars, in robots for dementia patients, and even in computers that beat humans at games.

    What does this mean?

    • Companies' revenue models will change
    • Innovations may no longer come primarily from humans
    • Much of the work now done by humans will be taken over by machines.

    A few of these developments are highlighted in this article to show how important business intelligence is today.

    Revenue models based on data

    We read daily that information technology is turning existing revenue models upside down; we need only look at V&D. The number of companies using a business model in which external data collection and analysis is a crucial part of the revenue stream is growing rapidly, even in sectors until now strongly dominated by government, such as education and healthcare. Well-known companies such as Google and Facebook actually started without a concrete revenue model, but could no longer function without that data (and its analysis).


    Take, for example, a company like Amazon, which runs entirely on data. The data it collects largely concerns who we are, how we behave, and what our preferences are. Amazon gives this data ever more meaning by applying the latest technologies. One example is how Amazon even develops films and books based on our purchasing, viewing and reading behavior, and it will certainly not stop there. According to Gartner, Amazon is one of the most leading and visionary players in the market for Infrastructure as a Service (IaaS). Gartner also praises Amazon for how quickly it anticipates the market's technological needs.


    According to the United Nations, the newest innovations will arise from artificial intelligence. This presupposes that the machine will surpass man in devising innovations. IBM's Watson computer, for example, has already beaten humans on the quiz show Jeopardy. We can no longer do difficult mathematical calculations without a computer, but that does not mean the computer surpasses humans in everything. The development of self-driving cars recently showed that, with machine learning, humans can still take the lead, and on balance far less development time was needed.

    Man or machine?

    It is a fact that machines will take over more and more human tasks and will sometimes even exceed human thinking. In the coming period, man and machine will increasingly live side by side, and the computer will understand and master human behavior ever better. As a result, existing business models will change and many jobs in existing sectors will be lost. But whether the computer will surpass humans, and whether future innovation will come only from artificial intelligence, remains an open question. The industrial revolution, too, had an enormous impact on humanity, and in hindsight it brought many benefits, even if it was not always easy for many people at the time. Let us see how we can turn this to our advantage. Interested? Click here for more information.

    Ruud Koopmans, RK-Intelligentie.nl, 29 February 2016

