93 items tagged "artificial intelligence"

  • 'We must prepare for mass unemployment caused by robots'

    The rise of robots and artificial intelligence means that more and more jobs are disappearing. We must start preparing for mass unemployment now, professors warn.

    We are entering an era in which machines can take over almost all human tasks, said Moshe Vardi, professor of computer science at the University of Texas, at a conference in the United States this weekend, the FT writes. No sector will remain free of robots, so the big question, according to the scientists, is: if robots take over our work, what will we do?

    Vardi adds that we could all spend our time on enjoyable things, but that a life revolving solely around leisure is not everything either. "I believe that work is essential to human well-being."

    'Governments not yet prepared'
    Companies such as Google, Facebook, IBM, and Microsoft are scaling up their investments in artificial intelligence to billions this year, but governments do not yet seem prepared for this, experts at the conference said.

    At the initiative of Bart Selman, professor of computer science at Cornell University, an open letter to policymakers was drawn up last year urging them to map out the risks of increasingly intelligent machines. The letter has been signed by 10,000 entrepreneurs, professors, and engineers, including Tesla founder Elon Musk.

    Through the non-profit organization OpenAI, Musk is funding research into artificial intelligence and how people can benefit from it most. He sees artificial intelligence as one of the greatest threats to humanity.

    Source: RTL Z

  • 2016 will be the year of artificial intelligence

    December is traditionally the time of year for looking back, and New Year's Eve is naturally the best day of all for that. At Numrush, however, we prefer to look ahead. We already did so in early December with our RUSH Magazine. In that Gift Guide we gave gift tips based on a number of themes we will be hearing a lot about in the coming year. One subject deliberately remained somewhat underexposed in our Gift Guide, partly because it is not something you give as a gift, but also because it actually transcends the various themes: artificial intelligence. That is nothing new, of course, and a great deal has already happened in this field, but in the coming year its application will accelerate even further.

  • 2017 Investment Management Outlook


    Several major trends will likely impact the investment management industry in the coming year. These include shifts in buyer behavior as the Millennial generation becomes a greater force in the investing marketplace; increased regulation from the Securities and Exchange Commission (SEC); and the transformative effect that blockchain, robotic process automation, and other emerging technologies will have on the industry.

    Economic outlook: Is a major stimulus package in the offing?

    President-elect Donald Trump may have to depend heavily on private-sector funding to proceed with his $1 trillion infrastructure spending program, considering Congress's ongoing reluctance to increase spending. The US economy may be nearing full employment, with younger cohorts entering the labor market as more Baby Boomers retire. In addition, the prospects for a fiscal stimulus seem greater now than they were before the 2016 presidential election.

    Steady improvement and stability is the most likely scenario for 2017. Although weak foreign demand may continue to weigh on growth, domestic demand should be strong enough to provide employment for workers returning to the labor force, as the unemployment rate is expected to remain at approximately 5 percent. GDP annual growth is likely to hit a maximum of 2.5 percent. In the medium term, low productivity growth will likely put a ceiling on the economy, and by 2019, US GDP growth may be below 2 percent, despite the fact that the labor market might be at full employment. Inflation is expected to remain subdued. Interest rates are likely to rise in 2017, but should remain at historically low levels throughout the year. If the forecast holds, asset allocation shifts among cash, commodities, and fixed income may begin by the end of 2017.

    Investment industry outlook: Building upon last year’s performance
    Mutual funds and exchange-traded funds (ETFs) have experienced positive growth. Worldwide regulated funds grew at 9.1 percent CAGR versus 8.6 percent for US mutual funds and ETFs. Non-US investments grew at a slightly faster pace due to global demand. Both worldwide and US investments seemed to show declining demand in 2016 as returns remained low.

    Hedge fund assets have experienced steady growth over the past five years, even through performance swings.

    Private equity investments continued a track record of strong asset appreciation. Private equity has continued to attract investment even with current high valuations. Fundraising increased incrementally over the past five years as investors increased allocations in the sector.

    Shifts in investor buying behavior: Here come the Millennials
    Both institutional and retail customers are expected to continue to drive change in the investment management industry. The two customer segments are voicing concerns about fee sensitivity and transparency. Firms that enhance the customer experience and position advice, insight, and expertise as components of value should have a strong chance to set themselves apart from their competitors.

    Leading firms may get out in front of these issues in 2017 by developing efficient data structures to facilitate accounting and reporting and by making client engagement a key priority. On the retail front, the SEC is acting on retail investors' behalf with reporting modernization rule changes for mutual funds. This focus on engagement, transparency, and relationship over product sales is integral to creating a strong brand as a fiduciary, and it may prove to differentiate some firms in 2017.

    Growth in index funds and other passive investments should continue as customers react to market volatility. Investors favor the passive approach in all environments, as shown by net flows. They are using passive investments alongside active investments, rather than replacing the latter with the former. Managers will likely continue to add index share classes and index-tracking ETFs in 2017, even if profitability is challenged. In addition, the Department of Labor’s new fiduciary rule is expected to promote passive investments as firms alter their product offerings for retirement accounts.

    Members of the Millennial generation—which comprises individuals born between 1980 and 2000—often approach investing differently due to their open use of social media and interactions with people and institutions. This market segment faces different challenges than earlier generations, which influences their use of financial services.

    Millennials may be less prosperous than their parents and may need to own less in order to fully fund retirement. Many start their careers burdened by student debt. They may have a negative memory of recent stock market volatility, distrust financial institutions, favor socially conscious investments, and rely on recommendations from their friends when seeking financial advice.

    Investment managers likely need to consider several steps when targeting Millennials. These include revisiting product lines, offering socially conscious “impact investments,” assigning Millennial advisers to client service teams, and employing digital and mobile channels to reach and serve this market segment.

    Regulatory developments: Seeking greater transparency, incentive alignment, and risk control
    Even with a change in leadership in the White House and at the SEC, outgoing Chair Mary Jo White’s major initiatives are expected to endure in 2017 as they seek to enhance transparency, incentive alignment, and risk control, all of which build confidence in the markets. These changes include the following:

    Reporting modernization. Passed in October 2016, this new requirement of forms, rules, and amendments for information disclosure and standardization will require development by registered investment companies (RICs). Advisers will need technology solutions that can capture data that may not currently exist from multiple sources; perform high-frequency calculations; and file requisite forms with the SEC.

    Liquidity risk management (LRM). Passed in October 2016, this rule requires the establishment of LRM programs by open-end funds (except money market) and ETFs to reduce the risk of inability to meet redemption requirements without dilution of the interests of remaining shareholders.

    Swing pricing. Also passed in October 2016, this regulation provides an option for open-end funds (except money market and ETFs) to adjust net asset values to pass the costs stemming from purchase and redemption activity to shareholders.

    Use of derivatives. Proposed in December 2015, this requires RICs and business development companies to limit the use of derivatives and put risk management measures in place.

    Business continuity and transition plans. Proposed in June 2016, this measure requires registered investment advisers to implement written business continuity and transition plans to address operational risk arising from disruptions.

    The Dodd-Frank Act, Section 956. Reproposed in May 2016, this rule prohibits compensation structures that encourage individuals to take inappropriate risks that may result in either excessive compensation or material loss.

    The DOL’s Conflict-of-Interest Rule. In 2017, firms must comply with this major expansion of the “investment advice fiduciary” definition under the Employee Retirement Income Security Act of 1974. There are two phases to compliance:

    Phase one requires compliance with investment advice standards by April 10, 2017. Distribution firms and advisers must adhere to the impartial conduct standards and provide retirement investors with a notice that acknowledges the firm's fiduciary status and describes its material conflicts of interest. Firms must also designate a person responsible for addressing material conflicts of interest and for monitoring advisers' adherence to the impartial conduct standards.

    Phase two requires compliance with exemption requirements by January 1, 2018. Distribution firms must be in full compliance with exemptions, including contracts, disclosures, policies and procedures, and documentation showing compliance.

    Investment managers may need to create new, customized share classes driven by distributor requirements; drop distribution of certain share classes after rule implementation; and offer more fee reductions for mutual funds.

    Financial advisers may need to take another look at fee-based models, if they are not already using them; evolve their viewpoint on share classes; consider moving to zero-revenue share lineups; and contemplate higher use of ETFs, including active ETFs with a low-cost structure and 22(b) exemption (which enables broker-dealers to set commission levels on their own).

    Retirement plan advisers may need to look for low-cost share classes (R1-R6) to be included in plan options and potentially new low-cost structures.

    Key technologies: Transforming the enterprise

    Investment management is poised to become even more driven by advances in technology in 2017, as digital innovations play a greater role than ever before.

    Blockchain. A secure and effective technology for tracking transactions, blockchain should move closer to commercial implementation in 2017. Already, many blockchain-based use cases and prototypes can be found across the investment management landscape. With testing and regulatory approvals, it might take one to two years before commercial rollout becomes more widespread.

    Big data, artificial intelligence, and machine learning. Leading asset management firms are combining big data analytics with artificial intelligence (AI) and machine learning to achieve two objectives: (1) provide insights and analysis for investment selection to generate alpha, and (2) improve cost effectiveness by leveraging expensive human analyst resources with scalable technology. Expect this trend to gain momentum in 2017.

    Robo-advisers. Fiduciary standards and regulations should drive the adoption of robo-advisers, online investment management services that provide automated portfolio management advice. Improvements in computing power are making robo-advisers more viable for both retail and institutional investors. In addition, some cutting-edge robo-adviser firms could emerge with AI-supported investment decision and asset allocation algorithms in 2017.
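    Real robo-advisers use proprietary models, but the flavor of automated allocation advice can be sketched with a toy rule-of-thumb allocator. This is purely illustrative (the "100 minus age" heuristic plus a risk tilt), not any firm's actual method:

```python
def allocate(age, risk_tolerance):
    """Toy robo-style allocation: start from the classic '100 minus age'
    equity rule of thumb, then tilt by the client's stated risk tolerance."""
    equity = 100 - age
    tilt = {"low": -10, "medium": 0, "high": 10}[risk_tolerance]
    equity = max(0, min(100, equity + tilt))  # clamp to a valid percentage
    return {"stocks": equity, "bonds": 100 - equity}

print(allocate(35, "high"))  # {'stocks': 75, 'bonds': 25}
```

    A production system would layer on risk questionnaires, tax considerations, and automatic rebalancing, but the core loop is the same: client inputs in, a rules- or model-driven portfolio out.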

    Robotic process automation. Look for more investment management firms to employ sophisticated robotic process automation (RPA) tools to streamline both front- and back-office functions in 2017. RPA can automate critical tasks that require manual intervention, are performed frequently, and consume a significant amount of time, such as client onboarding and regulatory compliance.

    Change, development, and opportunity
    The outlook for the investment management industry in 2017 is one of change, development, and opportunity. Investment management firms that execute plans that help them anticipate demographic shifts, improve efficiency and decision making with technology, and keep pace with regulatory changes will likely find themselves ahead of the competition.

    Download 2017 Investment management industry outlook

    Source: Deloitte.com


  • 3 AI and data science applications that can help in dealing with COVID-19

    3 AI and data science applications that can help in dealing with COVID-19

    All industries already feel the impact of the current COVID-19 pandemic on the economy. As many businesses had to shut down and either switch to telework or let go of their entire staff, there is no doubt that it will take a long time for the world to recover from this crisis.

    Current prospects on the growth of the global economy, shared by different sources, support the idea of the long and painful recovery of the global economy from the COVID-19 crisis.
    Statista, for example, compares the initial GDP growth prognosis for 2020 with a prognosis based on the impact of the novel coronavirus on GDP growth, estimating a difference of as much as 0.5%.

    The last time that global GDP experienced such a decline was back in 2008 when the global economic crisis affected every industry with no exceptions.

    During the current pandemic, we also see different industries changing their growth prognoses.
    In the IT industry, for instance, expected spending growth in 2020 no longer even reaches the pessimistic coronavirus scenario, and spending is now expected to shrink.

    It would be foolish to claim that the negative effect of the COVID-19 crisis can be reversed. It is already our reality that many businesses and industries around the world will suffer during the current global economic crisis.
    Governments around the world responded to this crisis by helping businesses not go bankrupt with state financial support. However, this support is only expected to have a short-term effect and will hardly mitigate the final effect of the global economic crisis on businesses around the world.

    So, in search of ways to soften the blow to the struggling global economy, the world will likely turn to technology, just as it did when everyone was forced to work from home.

    In this article, we offer our stance on how AI and data scientists, in particular, can help respond to the COVID-19 crisis and help relieve its negative effect.

    1. Data science and healthcare system

    The biggest negative effect on the global economy can come from failing healthcare systems. This is why governments around the world ordered citizens to stay at home and self-isolate, since in many cases the course of COVID-19 can be asymptomatic.

    Is increasing investment in the healthcare system a bad thing altogether?

    No, if we are talking about healthcare systems at a local level, like a state or a province. “At a local level, increasing investments in the healthcare system increases the demand for related products and equipment in direct ratio,” says Dorian Martin, a researcher at WowGrade.

    However, in case local governments run out of money in their emergency budgets, they might have to ask the state government for financial support.

    This scenario could become our reality if the number of infected people rapidly increases, with hospitals potentially running out of equipment, beds, and, most critically, staff.

    What can data science do to help manage this crisis?

    UK’s NHS healthcare data storage

    Some countries are already preparing for the scenario described above with the help of data scientists.
    For instance, the UK government ordered NHS England to develop a data store that combines multiple data sources and delivers their information to one secure cloud store.
    What will this data include?

    This cloud storage will help NHS healthcare workers access information on the movement of critical staff and the availability of hospital beds and equipment.

    Apart from that, this data store will help the government get a comprehensive, accurate view of the current situation, detect anomalies, and make timely decisions based on real data received from hospitals and NHS partner organizations.

    Thus, the UK government and NHS are looking into data science to create a system that will help the country tackle the crisis consistently, and manage the supply and demand for critical hospital equipment needed to fight the pandemic.

    2. AI’s part in creating the COVID-19 vaccine

    Another critical factor that affects the current global economic crisis is the COVID-19 vaccine. It has already become clear that the world is in standby mode until scientists develop a vaccine that will return people to their normal lives.

    It’s a simple chain of cause and effect: the global and local economies depend on consistent production; production depends on open, functioning facilities; facilities depend on workers; and workers, in turn, depend on a vaccine to be able to return to work.

    And while we may still be more than a year away from a COVID-19 vaccine being available to the general public, scientists are turning to AI to speed up the process.

    How can AI help develop the COVID-19 vaccine?

    • With the help of AI, scientists can analyze the structure of the virus and how it attaches itself to human cells, i.e., its behavior. This data helps researchers build the foundation for vaccine development.
    • AI and data science become part of the vaccine development process, as they help scientists analyze thousands of research papers on the matter to make their approach to the vaccine more precise.

    An important part of developing a vaccine is analyzing and understanding the virus's proteins and its genetic sequence. In January 2020, Google DeepMind launched AlphaFold, a system that predicts the virus's protein structures in 3D. This invention has already helped U.S. scientists study the virus well enough to create a trial vaccine and launch clinical trials this week.

    However, scientists are also looking into ways that AI can be involved not only in gathering information but in the very process of creating a vaccine.

    There have already been cases of drugs successfully created by AI. The British startup Exscientia created its first drug with the help of artificial intelligence algorithms. The drug is currently undergoing clinical trials, and it took only 12 months to be ready, compared with the five years drug development usually takes.

    Thus, AI gives the world hope that the long-awaited COVID-19 vaccine will be available faster than currently predicted. Yet there are still obstacles to applying artificial intelligence in this process, mainly because the technology itself is still maturing.

    3. Data science and the fight against misinformation

    Another factor, one rooted mostly in how people respond to the current crisis and yet among the most damaging to the global economy, is panic.

    We’ve already seen the effects of the rising panic during the Ebola virus crisis in Africa when local economies suffered from plummeting sectors like tourism and commerce.

    In economics, the period between the boom (the rising demand for the product) and the bust (a drop in product availability) is very short. During the current pandemic, we’ve seen quite a few examples of how panic buying led to low supply, which damaged local economies.

    How can data scientists tackle the threat of panic?

    The answer is already in the question: with data.

    One of the reasons why people panic is misinformation. “Our online poll has shown that only 12% of respondents read authoritative COVID-19-related resources, while others mostly relied on word-of-mouth approach,” says Martin Harris, a researcher at Studicus.

    Misinformation, unfortunately, occurs not only among people but at the government level as well. One prominent example is U.S. officials promoting an antimalarial drug as an effective treatment for COVID-19 patients when, in fact, its effectiveness has not been proven.

    The best remedy for the virus of panic and misinformation is to gather information from authoritative resources on the COVID-19 pandemic in one place, so that people can follow the situation not only at the local level but at the global level as well.

    Data scientists and developers at Boston Children's Hospital have created such a system, called HealthMap, to help people track the COVID-19 pandemic, as well as other disease outbreaks around the world.


    While there are already quite a few applications of AI and data science that help us respond to the COVID-19 crisis, this crisis is still in its early stages of development.

    We can already use data science to compile vital information on critical hospital staff and equipment, fight misinformation, and support vaccine development with AI, yet we may still discover new ways of applying AI and data science to help the world respond to the COVID-19 crisis.

    Yet, today, we can already say that AI and data science have been of enormous help in fighting the pandemic, giving us hope that we will return to our normal lives as soon as possible.

    Author: Estelle Liotard

    Source: In Data Labs

  • 3 Predicted trends in data analytics for 2021

    3 Predicted trends in data analytics for 2021

    It’s that time of year again for prognosticating trends and making annual technology predictions. As we move into 2021, there are three trends data analytics professionals should keep their eyes on: OpenAI, optimized big data storage layers, and data exchanges. What ties these three technologies together is the maturation of the data, AI and ML landscapes. Because there already is a lot of conversation surrounding these topics, it is easy to forget that these technologies and capabilities are fairly recent evolutions. Each technology is moving in the same direction -- going from the concept (is something possible?) to putting it into practice in a way that is effective and scalable, offering value to the organization.

    I predict that in 2021 we will see these technologies fulfilling the promise they set out to deliver when they were first conceived.

    #1: OpenAI and AI’s Ability to Write

    OpenAI is a research and deployment company that last year released what it calls GPT-3: artificial intelligence that generates text mimicking text produced by humans. This AI offering can write prose for blog posts, answer questions as a chatbot, or write software code. It has risen to a level of sophistication where it is getting difficult to discern whether a given text was written by a human or a machine. Where this type of AI is already familiar is email: Gmail anticipates what the user will write next and offers word or sentence prompts. GPT-3 goes further: the user can supply a title or designate a topic, and GPT-3 will write a thousand-word blog post.
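    GPT-3 itself is a huge neural network trained on vast text corpora, but the core idea it scales up, predicting the next word from the words before it, can be sketched with a toy bigram model. This is purely illustrative (the corpus and function names are invented for the example), not how GPT-3 is actually built:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, prompt, n=3):
    """Greedily extend the prompt with the most likely next word, n times."""
    words = prompt.split()
    for _ in range(n):
        candidates = model.get(words[-1])
        if not candidates:
            break  # no known continuation for the last word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

corpus = "the model writes text and the model writes code"
model = train_bigram(corpus)
print(complete(model, "the", 2))  # prints "the model writes"
```

    Where this toy model looks back one word, GPT-3 conditions on thousands of preceding tokens with billions of learned parameters, which is what makes its completions read so fluently.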

    This is an inflection point for AI, which, frankly, hasn’t been all that intelligent up to now. Right now, GPT3 is on a slow rollout and is being used primarily by game developers enabling video gamers to play, for example, Dungeons and Dragons without other humans.

    Who would benefit from this technology? Anyone who needs content. It will write code. It can design websites. It can produce articles and content. Will it totally replace humans who currently handle these duties? Not yet, but it can offer production value when an organization is short-staffed. As this technology advances, it will cease to feel artificial and will eventually be truly intelligent. It will be everywhere and we’ll be oblivious to it.

    #2: Optimized Big Data Storage Layers

    Historically, massive amounts of data have been stored in the cloud, on hard drives, or wherever your company holds information for future use. The problem with these systems has been finding the right data when needed. It hasn’t been well optimized, and the adage “like looking for a needle in the haystack” has been an accurate portrayal of the associated difficulties. The bigger the data got, the bigger the haystack got, and the harder it became to find the needle.

    In the past year, a number of technologies have emerged, including Iceberg, Hudi, and Delta Lake, that are optimizing the storage of large analytics data sets and making it easier to find that needle. They organize the hay in such a way that you only have to look at a small, segmented area, not the entire data haystack, making the search much more precise.

    This is valuable not only because you can access the right data more efficiently, but because it makes the data retrieval process more approachable, allowing for widespread adoption in companies. Traditionally, you had to be a data scientist or engineer and had to know a lot about underlying systems, but these optimized big data storage layers make it more accessible for the average person. This should decrease the time and cost of accessing and using the data.

    For example, Iceberg came out of an R&D project at Netflix and is now open source. Netflix generates a lot of data, and if an executive wanted to use that data to predict what the next big hit will be in its programming, it could take three engineers upwards of four weeks to come up with an answer. With these optimized storage layers, you can now get answers faster, and that leads to more specific questions with more efficient answers.
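    The pruning idea behind these storage layers can be sketched in a few lines: keep min/max statistics for each data file in a small metadata layer, and let a query skip any file whose range cannot contain matching rows. This is a toy illustration of the technique, not the actual Iceberg, Hudi, or Delta Lake APIs; the file names and values are invented:

```python
# Data files, each holding (date, value) rows for one time range.
files = {
    "part-2019.dat": [("2019-01-03", 10), ("2019-07-22", 25)],
    "part-2020.dat": [("2020-02-14", 40), ("2020-11-30", 55)],
    "part-2021.dat": [("2021-01-09", 70), ("2021-06-18", 85)],
}

# Metadata layer: min/max of the date column per file, computed once at write time.
stats = {name: (min(r[0] for r in rows), max(r[0] for r in rows))
         for name, rows in files.items()}

def scan(date_from, date_to):
    """Read only the files whose [min, max] range overlaps the query range."""
    hits, scanned = [], []
    for name, (lo, hi) in stats.items():
        if hi < date_from or lo > date_to:
            continue  # pruned: this file cannot contain matching rows
        scanned.append(name)
        hits += [r for r in files[name] if date_from <= r[0] <= date_to]
    return hits, scanned

rows, scanned = scan("2020-01-01", "2020-12-31")
print(scanned)  # only part-2020.dat is read; the other two files are skipped
```

    The real table formats add much more (snapshots, schema evolution, transactional commits), but this skip-by-statistics step is the reason a query no longer has to search the whole haystack.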

    #3: Data Exchanges

    Traditionally, data has stayed siloed within an organization and never leaves. It has become clear that another company may have valuable data in their silo that can help your organization offer a better service to your customers. That’s where data exchanges come in. However, to be effective, a data exchange needs a platform that offers transparency, quality, security, and high-level integration.

    Going into 2021, data exchanges are emerging as an important component of the data economy, according to research from Eckerson Group. According to this recent report, “A host of companies are launching data marketplaces to facilitate data sharing among data suppliers and consumers. Some are global in nature, hosting a diverse range of data sets, suppliers, and consumers. Others focus on a single industry, functional area (e.g., sales and marketing), or type of data. Still, others sell data exchange platforms to people or companies who want to run their own data marketplace. Cloud data platform providers have the upper hand since they’ve already captured the lion’s share of data consumers who might be interested in sharing data.”

    Data exchanges are very much related to the first two focal points we already mentioned, so much so that data exchanges are emerging as a must-have component of any data strategy. Once you can store data more efficiently, you don’t have to worry about adding greater amounts of data, and when you have AI that works intelligently, you want to be able to use the data you have on hand to fill your needs.

    We might reach a point where Netflix isn’t just asking the technology what kind of content to produce but the technology starts producing the content. It uses the data it collects through the data exchanges to find out what kind of shows will be in demand in 2022, and then the AI takes care of the rest. It’s the type of data flow that today might seem far-fetched, but that’s the direction we’re headed.

    A Final Thought

    One technology is about getting access to data, one is about understanding new data, and one is about acting on what the data says. As these three technologies mature, we can expect to see a linear growth pattern and see them all intersect at just the right time.

    Author: Nick Jordan

    Source: TDWI

  • 5 Predictions for Artificial Intelligence in 2016

    Get ready to work alongside smart machines

     At Narrative Science, we love making predictions about innovation, technology and, in particular, the rise of artificial intelligence. We may be a bit too optimistic about the timing of certain technologies going mainstream, but we can’t help it. We are wildly optimistic about the future and genuinely believe that we have entered a dramatically new era of artificial intelligence innovation. That said, this year, we tried to focus our predictions on the near-term. Here’s our best guess as to what will happen in 2016.

    1. New inventions using AI will explode.

    In 2015, artificial intelligence went mainstream. Major tech companies including Google, Facebook, Amazon and Twitter made huge investments in AI, almost all of technology research company Gartner’s strategic predictions included AI, and headlines declared that AI-driven technologies were the next big disruptor to enterprise software. In addition, companies that made huge strides in AI, including Facebook, Microsoft and Google, open-sourced their tools. This makes it likely that in 2016, new inventions will increasingly come to market from companies discovering new ways to apply AI versus building it. With entrepreneurs now having access to low-cost quality AI technologies to create new products, we’ll also likely see an explosion in new startups using AI.

    2. Employees will work alongside smart machines.

    Smart machines will augment work and help employees be more productive, not replace them. Analytics industry leader, Tom Davenport, stated it well when he predicted that “smart leaders will realize that augmentation—combining smart humans with smart machines—is a better strategy than automation.”

    3. Executives will demand transparency.

    Business leaders will realize that smart machines throwing out answers without explanation are of little use. If you walked into a CEO’s office and said we need to shut down three factories, the first question from the CEO would be: “Why?” Just producing a result isn’t enough, and communication capabilities will increasingly be built into advanced analytics and intelligent systems so that these systems can explain how they are arriving at their answers.

    4. Artificial Intelligence will reshape companies outside of IT.

    AI-powered business applications will start to infiltrate companies other than technology firms. Employees, teams and entire departments will champion process re-engineering efforts with these intelligent systems whether they realize it or not. As each individual app eliminates a task, employees will automate many of the mundane parts of their jobs and assemble their own stack of AI-powered apps. Teammates eager to be productive and stay competitive will follow, along with team managers who are looking to execute on cost-cutting efforts.

    5. Innovation labs will become a competitive asset.

    With the pace of innovation accelerating, large organizations in industries such as retail, insurance and government will focus even more energies on remaining competitive and discovering the next big thing by forming innovation labs. Innovation labs have existed for some time, but in 2016, we’ll begin to see more resources devoted to innovation labs and more technologies discovered in the labs actually implemented across different company functions and business lines.

    2016 will be a big year for AI. Much of the work in AI in 2016 will be the catalyst for rapid acceleration of the development and adoption of AI-powered applications. In addition and perhaps even more significant, 2016 will bring about a major shift in the perception of AI. It will cease to be a scary, abstract set of ideas and concepts and will be better understood and accepted as more people realize the potential of AI to augment what we do and make our lives more productive.

    Source: Time

  • 6 Changes in the jobs of marketers and market analysts caused by AI

    Artificial intelligence is having a profound impact on the state of marketing in 2019. And AI technology will be even more influential in the years to come.

    If you’re a marketer or a business owner in today’s competitive marketplace, you’ve probably tried just about everything you can think of to maximize your success. You’ve dabbled in digital marketing, visited trade shows, paid for print advertising, and incentivized customer testimonials. It’s probably resulted in lots of stress, sleepless nights, and even CBD oil drops to give you the energy and focus to keep going.

    Marketing requires multiple approaches to succeed, so while you should stick with the things you’ve been doing successfully, you’ll also want to include artificial intelligence (AI) in your current strategy. If you haven’t already, you might be falling behind your competitors. As many as 80% of marketers believe that AI will be the most effective tool by 2020.

    AI marketing techniques increase the efficiency of your marketing and are often more effective than some of the traditional tactics you may be using. You’ll combine big data and inbound marketing to deliver a practical marketing strategy that drives conversions. Here are some ways you can apply this seemingly magical tool:

    1. Customer personas

    The most basic rule about marketing is that you can’t hope to run successful campaigns if you don’t know who you’re targeting. A good marketer will create customer personas that tell you who your target market is and how you can best service them. Personas are made at the basic level by listing demographics, interests, and other information that can help you target an audience.

    About 53% of marketers say that AI is extremely useful in identifying customers. It provides information that you might not have otherwise considered when drafting a marketing strategy. This is extremely valuable since more specific information leads to more effective marketing.

    To capture this essential data, look through your company analytics. Define the demographics of those who follow you on social media, make purchases on your website, and comment or inquire about your products/services. This data can inform a more detailed persona designed to target the right customer base.

    2. Digital advertising campaigns

    Many marketers have heard that a digital advertising campaign is essential for furthering sales, but they haven’t seen the results they hoped for. Artificial intelligence can significantly improve these campaigns. Once you’ve created a comprehensive view of your customer base, you’ll experience far more effective digital advertising campaigns.

    A great example of this is Facebook advertising, which is named by many marketing experts as the best bang for your buck. It allows you to create advertisements that are specifically targeted towards those who are most likely to make a purchase. However, it only works if you know exactly who your target audience is.

    Thanks to the abundance of consumer data collected by websites, social sites, and keyword searches, you’ll have all the information you need for more effective digital ads.

    3. Automated e-mail and SMS campaigns

    E-mail and SMS marketing are considered some of the best lead-generating marketing tactics out there. E-mail is the number-one source of business communication, with 86% of consumers and business professionals reporting it as their preferred channel. More importantly for sales, nearly 60% say it’s their most effective channel for revenue generation.

    SMS marketing, although not as popular as e-mail among marketers, boasts similar data for millennial clients, or those aged 18-36. Thanks to AI, we know that about 83% of millennials open an SMS message within 90 seconds of receiving it. Three quarters say they prefer SMS for promotions, surveys, reminders, and similar communications from brands.

    With the help of AI, we not only understand the essentials of e-mail and SMS marketing, but also have the insights to make them better. AI-enabled tools facilitate targeted campaigns to a specific audience. They handle the busy work behind these campaigns so that you can focus more on developing products and customer service.

    4. Market research

    Savvy marketers begin every new campaign with market research, gathering information about customers, effective marketing strategies, and trends in the industry. This information is invaluable for directing campaigns effectively and making products more appealing to the intended audience.

    Big data provides all that information for you, although it’s difficult to make sense of it all on the surface. There’s so much information that you’ll need analytics tools to extract the most useful data for directing your marketing efforts.

    Once you’ve broadened your horizons with data-deciphering tools, you’ll have an easier time interpreting customer emotions and their perceptions of your brand. You’ll be able to make changes or continue implementing an effective strategy with this insightful information.

    5. User experience

    As business owners know, it’s all about the user experience. A good marketing campaign begins with a website and advertisements designed specifically for customers’ benefits. In fact, customers are beginning to demand information, products, and services at lightning speed. AI can help you give that to them.

    One example is the use of chatbots for customer service. When customers reach out to you on Facebook Messenger, for example, you can set up a chatbot to respond immediately and let them know you’ll be with them shortly.

    Another example is the personalization that comes through AI. As you get to know your audience better, you can cater your advertisements and website experiences to the individual. Each time customers log onto your website, they’ll be greeted by name, and advertisements all over the web will show them only the things they’re interested in seeing. E-mail marketing will improve with personalization as well.

    Social media and Google advertisements are all about catering more directly to the user experience. The data you collect about individual consumers all but guarantees your ads will be shown to the right people.

    6. Sales forecasting

    Fruitful marketing drives sales, a metric that’s easier to forecast and understand with the use of AI. Marketers can use all the information derived from inbound communication and compare it to traditional metrics in order to determine updates and improvements for sales strategies.

    AI can forecast the results of a given tactic, so you can determine whether it’s worth the expense. This saves marketers significant time and money while driving more sales and growth.

    AI is redefining the state of marketing

    Artificial intelligence is having a profound impact on the state of marketing in 2019. And AI technology will be even more influential in the years to come. Make sure that you understand its impact and find ways to utilize it to its full potential.

    Author: Diana Hope

    Source: SmartDataCollective

  • 9 Data issues to deal with in order to optimize AI projects

    The quality of your data affects how well your AI and machine learning models will operate. Getting ahead of these nine data issues will poise organizations for successful AI models.

    At the core of modern AI projects are machine-learning-based systems which depend on data to derive their predictive power. Because of this, all artificial intelligence projects are dependent on high data quality.

    However, obtaining and maintaining high quality data is not always easy. There are numerous data quality issues that threaten to derail your AI and machine learning projects. In particular, these nine data quality issues need to be considered and prevented before issues arise.

    1. Inaccurate, incomplete and improperly labeled data

    Inaccurate, incomplete or improperly labeled data is typically the cause of AI project failure. These data issues can range from bad data at the source to data that has not been cleaned or prepared properly. Data might be in the incorrect fields or have the wrong labels applied.

    Data cleanliness is such an issue that an entire industry of data preparation has emerged to address it. While it might seem an easy task to clean gigabytes of data, imagine having petabytes or zettabytes of data to clean. Traditional approaches simply don't scale, which has resulted in new AI-powered tools to help spot and clean data issues.
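
    A minimal sketch of what such rule-based cleaning involves, dropping incomplete rows and normalizing labels; the field names and label vocabulary here are hypothetical, invented purely for illustration:

```python
# Hypothetical records with "age" and "label" fields; labels limited to cat/dog.
VALID_LABELS = {"cat", "dog"}

def clean(records):
    """Drop incomplete rows and normalize label casing and whitespace."""
    cleaned = []
    for row in records:
        age, label = row.get("age"), row.get("label")
        if age is None or label is None:      # incomplete record: drop it
            continue
        label = label.strip().lower()         # repair casing/whitespace
        if label not in VALID_LABELS:         # improperly labeled: drop it
            continue
        cleaned.append({"age": age, "label": label})
    return cleaned

raw = [
    {"age": 3, "label": " Cat "},   # fixable: stray whitespace and case
    {"age": None, "label": "dog"},  # incomplete: missing value
    {"age": 5, "label": "dgo"},     # unrecoverable typo: dropped
    {"age": 7, "label": "dog"},
]
print(clean(raw))  # → [{'age': 3, 'label': 'cat'}, {'age': 7, 'label': 'dog'}]
```

    Real pipelines replace hand-written rules like these with learned or statistical checks, but the shape of the task is the same: detect, repair what you can, and drop the rest.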

    2. Having too much data

    Since data is important to AI projects, it's commonly thought that the more data you have, the better. However, in machine learning, throwing more data at a model sometimes doesn't actually help. A counterintuitive data quality issue, therefore, is having too much data.

    While it might seem that too much data can never be a bad thing, more often than not a good portion of it is not usable or relevant. Having to sift through a large data set to separate the useful data wastes organizational resources. In addition, all that extra data can introduce "noise", leading machine learning systems to learn from nuances and variances in the data rather than the more significant overall trend.

    3. Having too little data

    On the flip side, having too little data presents its own problems. While training a model on a small data set may produce acceptable results in a test environment, bringing this model from proof of concept or pilot stage into production typically requires more data. In general, small data sets produce results that have low complexity, are biased or overfitted, and will not be accurate when working with new data.

    4. Biased data

    In addition to incorrect data, another issue is that the data might be biased. The data might be selected from larger data sets in ways that don't appropriately represent the wider data set. Data might also be derived from older information that reflects past human bias. Or perhaps there are issues with the way the data is collected or generated that result in a biased outcome.

    5. Unbalanced data

    While everyone wants to try to minimize or eliminate bias from their data sets, this is much easier said than done. There are several factors that can come into play when addressing biased data. One factor can be unbalanced data. Unbalanced data sets can significantly hinder the performance of machine learning models. Unbalanced data has an overrepresentation of data from one community or group while unnecessarily reducing the representation of another group.

    An example of an unbalanced data set can be found in some approaches to fraud detection. In general, most transactions are not fraudulent, which means that only a very small portion of your data set will consist of fraudulent transactions. Since a model trained on this data receives significantly more examples from one class than the other, its results will be biased towards the majority class. That's why it's essential to conduct thorough exploratory data analysis to discover such issues early and consider solutions that can help balance data sets.
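
    One common counter to such imbalance is inverse-frequency class weighting, sketched below. The 1% fraud rate mirrors the example above, and the formula is the widely used "balanced" heuristic (samples / (classes × class count)); all numbers are illustrative:

```python
from collections import Counter

# Hypothetical fraud-detection labels: 0 = legitimate, 1 = fraudulent.
labels = [0] * 990 + [1] * 10   # only 1% of transactions are fraud

counts = Counter(labels)
n, k = len(labels), len(counts)

# Inverse-frequency class weights: the rare class gets a proportionally
# larger weight, so each class contributes equally to the training loss.
weights = {cls: n / (k * c) for cls, c in counts.items()}
print(weights)  # → {0: 0.50505..., 1: 50.0}
```

    Passed to a model's loss function (or used to guide resampling), weights like these keep the majority class from dominating what the model learns.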

    6. Data silos

    Related to the issue of unbalanced data is the issue of data silos. A data silo is where only a certain group or limited number of individuals at an organization have access to a data set. Data silos can result from several factors, including technical challenges or restrictions in integrating data sets as well as issues with proprietary or security access control of data.

    They are also the product of structural breakdowns at organizations where only certain groups have access to certain data, as well as cultural issues where a lack of collaboration between departments prevents data sharing. Regardless of the reason, data silos can limit the ability of those working on artificial intelligence projects to access comprehensive data sets, possibly lowering the quality of results.

    7. Inconsistent data

    Not all data is created equal. Just because you're collecting information doesn't mean it can or should always be used. Related to the challenge of collecting too much data is that of collecting irrelevant data for training. Training the model on clean but irrelevant data results in the same issues as training systems on poor quality data.

    In conjunction with the concept of data irrelevancy is inconsistent data. In many circumstances, the same records might exist multiple times in different data sets but with different values, resulting in inconsistencies. Duplicate data is one of the biggest problems for data-driven businesses. When dealing with multiple data sources, inconsistency is a big indicator of a data quality problem.

    8. Data sparsity

    Another issue is data sparsity: missing data, or an insufficient quantity of specific expected values in a data set. Data sparsity can degrade the performance of machine learning algorithms and their ability to calculate accurate predictions. If data sparsity is not identified, it can result in models being trained on noisy or insufficient data, reducing the effectiveness or accuracy of results.
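
    A simple per-field sparsity report is often the first check for this; the records and the 50% threshold below are assumptions chosen purely for illustration:

```python
# Hypothetical records, with None marking a missing value.
rows = [
    {"age": 31, "income": None, "city": "Utrecht"},
    {"age": None, "income": None, "city": "Delft"},
    {"age": 45, "income": 52000, "city": None},
]

def sparsity(rows):
    """Fraction of missing (None) values per field: 0.0 dense, 1.0 empty."""
    fields = rows[0].keys()
    return {f: sum(r[f] is None for r in rows) / len(rows) for f in fields}

report = sparsity(rows)
print(report)        # age and city are 1/3 missing; income is 2/3 missing

# Flag fields too sparse to train on, under an assumed 50% threshold.
too_sparse = [f for f, s in report.items() if s > 0.5]
print(too_sparse)    # → ['income']
```

    Fields flagged this way can then be imputed, collected more thoroughly, or dropped before training.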

    9. Data labeling issues

    Supervised machine learning models, one of the fundamental types of machine learning, require data to be labeled with correct metadata for machines to be able to derive insights. Data labeling is a hard task, often requiring human resources to put metadata on a wide range of data types. This can be both complex and expensive. One of the biggest data quality issues currently challenging in-house AI projects is the lack of proper labeling of machine learning training data. Accurately labeled data ensures that machine learning systems establish reliable models for pattern recognition, forming the foundations of every AI project. Good quality labeled data is paramount to accurately training the AI system on what data it is being fed.

    Organizations looking to implement successful AI projects need to pay attention to the quality of their data. While reasons for data quality issues are many, a common theme that companies need to remember is that in order to have data in the best condition possible, proper management is key. It's important to keep a watchful eye on the data that is being collected, run regular checks on this data, keep the data as accurate as possible, and get the data in the right format before having machine learning models learn on this data. If companies are able to stay on top of their data, quality issues are less likely to arise.

    Author: Kathleen Walch

    Source: TechTarget

  • A brief look into Reinforcement Learning

    Reinforcement Learning (RL) is a very interesting topic within Artificial Intelligence, and the concept is quite fascinating. In this post I will try to give a nice initial picture for those who want to know more about RL.

    What is Reinforcement Learning?

    Conceptually, RL is a framework describing systems (here called agents) that learn how to interact with the surrounding environment purely through gathered experience. After each action (or interaction), the agent earns a reward: feedback from the environment that quantifies the quality of that action.

    Humans learn by the same principle. Think about a baby walking around: for this baby, everything is new. How can a baby know that grabbing something hot is dangerous? Only by touching a hot object and getting a painful burn. With this bad reward (or punishment) the baby learns that it is best to avoid touching anything too hot.

    It is important to point out that the terms agent and environment must be interpreted in a broader sense. It is easiest to visualize the agent as something like a robot and the environment as the place it is situated in. This is a fair analogy, but it can be much more complex. I like to think of the agent as a controller in a closed-loop system: it is basically an algorithm responsible for making decisions. The environment can be anything the agent interacts with.

    A simple example to help you understand

    For a better understanding I will use a simple example here. Imagine a wheeled robot inside of a maze, trying to learn how to reach a goal marker. However, some obstacles are in its way. The aim is that the agent learns how to reach the goal without crashing into the obstacles. So, let's highlight the main components that compose this RL problem:

    • Agent: The decision making system. The robot, in our example.
    • Environment: A system which the agent interacts with. The maze, in this case.
    • State: For the agent to choose how to behave, it needs to estimate the environment's state. For each state, there should exist an optimal action for the agent to choose. The state can be the robot's position, or an obstacle detected by the sensors.
    • Action: How the agent interacts with the environment. Usually there is a finite set of actions the agent can perform. In our example, it is the direction in which the robot should move.
    • Reward: The feedback that allows the agent to know whether an action was good or not. A bad reward (a low or negative value) can also be interpreted as a punishment. The main goal of RL algorithms is to maximize the long-term reward. If the robot reaches the goal marker, a big reward should be given. However, if it crashes into an obstacle, a punishment should be given instead.
    • Episode: Most RL problems are episodic, meaning some event must terminate the episode's execution. In our example the episode should finish when the robot reaches the goal or when a time limit is exceeded (to prevent the robot from standing still forever).

    Usually, the agent is assumed to have no previous knowledge about the environment, so in the beginning actions are chosen randomly. For each wrong decision (for example, crashing into an obstacle) the agent is punished; good decisions, on the other hand, are rewarded. Learning happens as the agent figures out how to avoid situations where punishment may occur and to choose actions that lead it to the goal.

    The reward accumulated in each episode is expected to increase over time and can be used to track the agent's learning progress. After many episodes, the robot should know how to reach the goal marker while avoiding any occasional obstacle, with no previous information about the environment. Of course there are many other things to consider, but let's keep it simple for now.
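
    The loop described above maps directly onto tabular Q-learning, a classic RL algorithm. The sketch below uses a 1-D corridor instead of a full maze, and every number in it (grid size, rewards, learning rate, and so on) is an assumption chosen for illustration:

```python
import random

random.seed(0)  # deterministic run for this sketch

# A 1-D corridor: the agent starts at cell 0 and must reach the goal at
# cell 4. Reaching the goal earns +10; every step costs -1, punishing
# aimless wandering.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                  # move left or move right
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    for _ in range(50):             # step limit terminates stuck episodes
        # Epsilon-greedy: usually exploit the best known action,
        # occasionally explore a random one.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # environment transition
        r = 10.0 if s2 == GOAL else -1.0        # reward from environment
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if s == GOAL:               # episode ends at the goal
            break

# The learned greedy policy: best action for each non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1}
```

    With these settings the greedy policy converges to moving right from every cell, the shortest path to the goal, and the reward accumulated per episode rises exactly as described above.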

    Author: Felp Roza

    Source: Towards Data Science

  • A Shortcut Guide to Machine Learning and AI in The Enterprise


    Predictive analytics / machine learning / artificial intelligence is a hot topic – what’s it about?

    Using algorithms to help make better decisions has been the “next big thing in analytics” for over 25 years. It has been used in key areas such as fraud detection the entire time. But it’s now become a full-throated mainstream business meme that features in every enterprise software keynote — although the industry is battling with what to call it.

    It appears that terms like Data Mining, Predictive Analytics, and Advanced Analytics are considered too geeky or old for industry marketers and headline writers. The term Cognitive Computing seemed to be poised to win, but IBM’s strong association with the term may have backfired — journalists and analysts want to use language that is independent of any particular company. Currently, the growing consensus seems to be to use Machine Learning when talking about the technology and Artificial Intelligence when talking about the business uses.

    Whatever we call it, it’s generally proposed in two different forms: either as an extension to existing platforms for data analysts; or as new embedded functionality in diverse business applications such as sales lead scoring, marketing optimization, sorting HR resumes, or financial invoice matching.

    Why is it taking off now, and what’s changing?

    Artificial intelligence is now taking off because there’s a lot more data available, along with affordable, powerful systems to crunch through it all. It’s also much easier to get access to powerful algorithm-based software in the form of open-source products or embedded as a service in enterprise platforms.

    Organizations today have also become more comfortable with manipulating business data, with a new generation of business analysts aspiring to become “citizen data scientists.” Enterprises can take their traditional analytics to the next level using these new tools.

    However, we’re now at the “Peak of Inflated Expectations” for these technologies according to Gartner’s Hype Cycle — we will soon see articles pushing back on the more exaggerated claims. Over the next few years, we will find out the limitations of these technologies even as they start bringing real-world benefits.

    What are the longer-term implications?

    First, easier-to-use predictive analytics engines are blurring the line between “everyday analytics” and the data science team. A “factory” approach to creating, deploying, and maintaining predictive models means data scientists can have greater impact. And sophisticated business users can now access some of the power of these algorithms without having to become data scientists themselves.

    Second, every business application will include some predictive functionality, automating any areas where there are “repeatable decisions.” It is hard to think of a business process that could not be improved in this way, with big implications in terms of both efficiency and white-collar employment.

    Third, applications will use these algorithms on themselves to create “self-improving” platforms that get easier to use and more powerful over time (akin to how each new semi-autonomous-driving Tesla car can learn something new and pass it on to the rest of the fleet).

    Fourth, over time, business processes, applications, and workflows may have to be rethought. If algorithms are available as a core part of business platforms, we can provide people with new paths through typical business questions such as “What’s happening now? What do I need to know? What do you recommend? What should I always do? What can I expect to happen? What can I avoid? What do I need to do right now?”

    Fifth, implementing all the above will involve deep and worrying moral questions in terms of data privacy and allowing algorithms to make decisions that affect people and society. There will undoubtedly be many scandals and missteps before the right rules and practices are in place.

    What first steps should companies be taking in this area?

    As usual, the barriers to business benefit are more likely to be cultural than technical.

    Above all, organizations need to make sure they have the right technical expertise to navigate the confusion of new vendor offerings, the right business knowledge to know where best to apply them, and the awareness that their technology choices may have unforeseen moral implications.

    Source: timoelliott.com, October 24, 2016


  • A three-stage approach to make your business AI ready

    Organizations implementing artificial intelligence (AI) have increased by 270% over the last four years, according to a recent survey by Gartner. Even though the implementation of AI is a growing trend, 63% of organizations haven’t deployed this technology. What is holding them back: cost? talent shortage? something else?

    For many organizations it is the inability to reach the desired confidence level in the algorithm itself. Data science teams often blow their budget, time and resources on AI models that never make it out of the beginning stages of testing. And even if projects make it out of the initial stage, not all projects are successful.

    One example we saw last year was Amazon’s attempt to implement AI in their HR department. Amazon received a huge number of resumes for their thousands of open positions. They hypothesized that they could use machine learning to go through all of the resumes and find the top talent. While the system was able to filter the resumes and apply scores to the candidates, it also showed gender bias. While this proof of concept was approved, they didn’t watch for bias in their training data and the project was recalled.

    Companies want to jump on the “Fourth Industrial Revolution” bandwagon and prove that AI will deliver ROI for their businesses. The truth is AI is in its early stages, and many companies are just now getting AI ready. For machine learning (ML) project teams starting a project for the first time, a deliberate, three-stage approach to project evolution will pave a shortcut to success:

    1. Test the fundamental efficacy of your model with an internal Proof of Concept (POC)

    The point of a POC is to prove that in a certain case it is possible to save money or improve a customer experience using AI. You are not attempting to get the model to the level of confidence needed to deploy it, but just to say (and show) the project can work.

    A POC like this is all about testing things to see if a given approach produces results. There is no sense in making deep investments for a POC. You can use an off-the-shelf algorithm, find open source training data, purchase a sample dataset, create your own algorithm with limited functionality, and/or label your own data. Find what works for you to prove that your project will achieve the intended corporate goal. A successful POC is what is going to get the rest of the project funded.

    In the grand scheme of your AI project, this step is the easiest part of your journey. Keep in mind, as you get further into training your algorithm, you will not be able to use sample data or prepare all of your training data yourself. The subsequent improvements in model confidence required to make your system production ready will take immense amounts of training data.

    2. Prepare the data you’ll need to train your algorithm… and keep going

    In this step the hard work really begins. Let’s say that your POC using pre-labeled data got your model to 60% confidence. 60% is not ready for primetime. In theory, that could mean that 40% of the interactions your algorithm has with customers will be unsatisfactory. How do you reach a higher level of confidence? More training data.

    Proving AI will work for your business is a huge step toward implementing it and actually reaping the benefits. But don’t let it lull you into thinking the next 10% of confidence will come easily. The ugly truth is that models have an insatiable appetite for training data, and getting from 60% to 70% confidence could take more training data than it took to reach the original 60%. The needs become exponential.

    3. Watch out for possible roadblocks

    Imagine: if it took tens of thousands of labeled images to prove one use case for a successful POC, it is going to take tens of thousands of images for each use case you need your algorithm to learn. How many use cases is that? Hundreds? Thousands? There are edge cases that will continually arise, and each of those will require training data. And on and on. It is understandable that data science teams often underestimate the quantity of training data they will need and attempt to do the labeling and annotating in-house. This could also partially account for why data scientists are leaving their jobs.

    While not enough training data is one common pitfall, there are others. It is essential that you are watching for and eliminating any sample, measurement, algorithm, or prejudicial bias in your training data as you go. You’ll want to implement agile practices to catch these things early and make adjustments.

    And one final thing to keep in mind: AI labs, data scientists, AI teams, and training data are expensive. Yet, while a Gartner report says that AI projects are among companies’ top three priorities, it also states that AI is thirteenth on their list of funding priorities. Yes, you’re going to need a bigger budget.

    Author: Glen Ford

    Source: Dataconomy

  • AI and the risks of Bias

    From facial recognition for unlocking our smartphones to speech recognition and intent analysis for voice assistance, artificial intelligence is all around us today. In the business world, AI is helping us uncover new insight from data and enhance decision-making.

    For example, online retailers use AI to recommend new products to consumers based on past purchases. And, banks use conversational AI to interact with clients and enhance their customer experiences.

    However, most of the AI in use now is “narrow AI,” meaning it is only capable of performing individual tasks. In contrast, general AI – which is not available yet – can replicate human thought and function, taking emotions and judgment into account. 

    General AI is still a way off, so only time will tell how it will perform. In the meantime, narrow AI does a good job of executing tasks, but it comes with limitations, including the possibility of introducing biases.

    AI bias may come from incomplete datasets or incorrect values. Bias may also emerge through interactions over time, skewing the machine’s learning. Moreover, a sudden business change, such as a new law or business rule, or an ineffective training algorithm can also cause bias. We need to understand how to recognize these biases, and design, implement and govern our AI applications to make sure the technology generates its desired business outcomes.

    Recognize and evaluate bias – in data samples and training

    One of the main drivers of bias is the lack of diversity in the data samples used to train an AI system. Sometimes the data is not readily available or it may not even exist, making it hard to address all potential use cases.

    For instance, airlines routinely run sensor data from in-flight aircraft engines through AI algorithms to predict needed maintenance and improve overall performance. But if the machine is trained only on data from flights over the Northern Hemisphere and then applied to a flight across sub-Saharan Africa, the differing conditions will produce inaccurate results. We need to evaluate the data used to train these systems and strive for well-rounded data samples.
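The kind of coverage check described above can be sketched in a few lines. Everything here is illustrative: the condition labels, the 5% threshold, and the function name are assumptions for the sake of the example, not part of any real airline pipeline.

```python
from collections import Counter

def coverage_gaps(training_conditions, production_conditions, min_share=0.05):
    """Report production conditions that are absent or rare in the training data."""
    seen = Counter(training_conditions)
    total = sum(seen.values())
    return {
        cond: seen.get(cond, 0) / total
        for cond in set(production_conditions)
        if seen.get(cond, 0) / total < min_share
    }

# Training flights come almost entirely from temperate Northern Hemisphere routes.
training = ["temperate"] * 96 + ["tropical"] * 4
production = ["temperate", "tropical", "desert"]

print(coverage_gaps(training, production))  # flags 'tropical' (4%) and 'desert' (0%)
```

Running a check like this before deployment surfaces the underrepresented operating conditions that the article warns about.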

    Another driver of bias is incomplete training algorithms. For example, a chatbot designed to learn from conversations may be exposed to politically incorrect language. Unless trained not to, the chatbot may start using the same language with consumers, which Microsoft unfortunately learned in 2016 with its now-defunct Twitter bot, “Tay.” If a system is incomplete or skewed through learning like Tay, then teams have to adjust the use case and pivot as needed.

    Rushed training can also lead to bias. We often get excited about introducing AI into our businesses so naturally want to start developing projects and see some quick wins. 

    However, early applications can quickly expand beyond their intended purpose. Given that current AI cannot cover the gamut of human thought and judgement, eliminating emerging biases becomes a necessary task. Therefore, people will continue to be important in AI applications. Only people have the domain knowledge – acquired industry, business, and customer knowledge – needed to evaluate the data for biases and train the models accordingly.

    Diversify datasets and the teams working with AI

    Diversity is the key to mitigating AI biases – diversity in the datasets and the workforce working day to day with the models. As stated above, we need to have comprehensive, well-rounded datasets that can broadly cover all possible use cases. If there is underrepresented or disproportionate internal data, such as if the AI only has homogenous datasets, then external sources may fill in the gaps in information. This gives the machine a richer pool of data to learn and work with – and leads to predictions that are far more accurate. 

    Likewise, diversity in the teams working with AI can help mitigate bias. When there is only a small group within one department working on an application, it is easy for the thinking of these individuals to influence the system’s design and algorithms. Starting with a diverse team or introducing others into an existing group can make for a much more holistic solution. A team with varying skills, thinking, approaches and backgrounds is better equipped to recognize existing AI bias and anticipate potential bias. 

    For example, one bank used AI to automate 80 percent of its financial spreading process for public and private companies. It involved extracting numbers out of documents and formatting them into templates, while logging each step along the way. To train the AI and make sure the system pulled the right data while avoiding bias, the bank relied on a diverse team of experts with data science, customer experience, and credit decisioning expertise. Today, it applies AI to spreading on 45,000 customer accounts across 35 countries.

    Consider emerging biases and preemptively train the machine

    While AI can introduce biases, proper design (including the data samples and models) and thoughtful usage (such as governance over the AI’s learning) can help reduce and prevent them. And, in many situations, AI can actually minimize bias that would otherwise be present in human decision-making. An objective algorithm can compensate for the natural bias that a human might introduce, such as approving a customer for a loan based on their appearance.

    In recruiting, an AI program can review job descriptions to eliminate unconscious gender biases by flagging and removing words that may be construed as more masculine or feminine, and replacing them with more neutral terms. It is important to note that a domain expert needs to go in and make sure the changes are still accurate, but the system can recognize things that people could miss. 
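A naive version of this flag-and-replace step might look as follows. The word list and replacement terms are invented for illustration, and, as the text notes, a domain expert still needs to review the output: the sketch happily produces "an proactive", for example.

```python
# Illustrative word list; a real system would use a validated lexicon.
FLAGGED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "aggressive": "proactive",
    "dominant": "leading",
}

def neutralize(job_description):
    """Replace flagged terms with neutral alternatives and report what was flagged."""
    flagged, out = [], []
    for word in job_description.split():
        key = word.lower().strip(".,!?")
        if key in FLAGGED_TERMS:
            flagged.append(key)
            out.append(FLAGGED_TERMS[key])
        else:
            out.append(word)
    return " ".join(out), flagged

text, flags = neutralize("We need an aggressive coding ninja.")
print(text)   # We need an proactive coding expert
print(flags)  # ['aggressive', 'ninja']
```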

    Bias is an unfortunate reality in today’s AI applications. But by evaluating the data samples and training algorithms and making sure that both are comprehensive and complete, we can mitigate unintended biases. We need to task diverse teams with governing the machines to prevent unwanted outcomes. With the right protocol and measures, we can ensure that AI delivers on its promise and yields the best business results.

     Author: Sanjay Srivastava

    Source: Information Management

  • Turning AI into a successful strategy: 8 tips for marketers


    Artificial Intelligence (AI) should be the most important aspect of a data strategy. More than 60 percent of marketers think so, according to research by MemSQL. But actually deploying AI turns out to be a different story. How can companies turn AI into a successful strategy? Here are 8 tips for marketers:

    1. Recommendation engines

    Focus on upselling by deploying recommendation engines. Recommendation engines are built to predict what else users might find interesting based on their search terms, especially when there is a lot of choice. Recommendation engines show users information or content they might not otherwise have seen, which can ultimately lead to higher revenue from more sales. As more becomes known about a visitor, ever better recommendations can be made, and the chance of a sale keeps growing. For example, more than 80 percent of the shows people watch on Netflix were found through its recommendation engine. How does this work? First, Netflix collects all the data from its users. What are they watching? What did they watch last year? Which series are watched back to back? And so on. In addition, a group of freelance and in-house taggers reviews and tags all content. Is a series set in space, or is the hero a police officer? Everything gets a tag. Machine learning algorithms are then let loose on this combined data, and viewers are divided into more than 2,000 different 'taste groups'. The group a user is assigned to determines which viewing suggestions he or she receives.
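Netflix's real system is vastly more sophisticated, but the tag-based idea can be illustrated with a toy sketch. The catalogue, tags, and scoring scheme below are all invented for illustration:

```python
from collections import Counter

# Hypothetical catalogue: every title has been tagged by (human) taggers.
CATALOGUE = {
    "Dark Star": {"space", "drama"},
    "Precinct 9": {"police", "thriller"},
    "Orbit Five": {"space", "thriller"},
    "Bake Night": {"cooking", "competition"},
}

def recommend(watched, k=2):
    """Rank unwatched titles by tag overlap with the user's viewing history."""
    profile = Counter(tag for title in watched for tag in CATALOGUE[title])
    scores = {
        title: sum(profile[t] for t in tags)
        for title, tags in CATALOGUE.items()
        if title not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend({"Dark Star", "Precinct 9"}))  # ['Orbit Five', 'Bake Night']
```

A viewer of a space drama and a police thriller gets the space thriller first: the same overlap logic, scaled up with learned embeddings and taste groups, drives the real engine.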

    2. Forecasting

    Good sales forecasts help companies grow. But forecasts have been made by humans for years, while emotions can make or break a quarter. Without science, forecasts are often either overly optimistic or overly pessimistic. AI can help with forecasting based purely on data and facts. Thanks to AI, these data and facts can also be explained, allowing companies to learn from earlier forecasts and making each subsequent forecast more accurate.
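A purely data-driven forecast, in its simplest form, is just a trend line fitted to past figures. The sketch below uses ordinary least squares; real forecasting models are far richer, and the sales figures are invented:

```python
def forecast_next(sales):
    """Fit a straight line (ordinary least squares) through past sales
    and extrapolate one period ahead. Purely data-driven: no gut feeling."""
    n = len(sales)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sales) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # predicted value for the next period

print(forecast_next([100, 110, 120, 130]))  # 140.0
```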

    3. Counter churn

    As every marketer knows, acquiring new customers is far more expensive than retaining existing ones. But how do you prevent customers from unsubscribing from your services or choosing other solutions? Make sure you understand customers who are about to leave your website better and better and can predict their behavior, because that is how customer loss can be minimized. When you effectively engage customers who are on the verge of leaving your website, you increase the chance of conversion. By using AI to build a predictive analytics model that detects potential 'churners' and then targeting them with a marketing campaign, you prevent customer loss and can make changes to your product to counter churn.
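A predictive churn model ultimately reduces to scoring each customer's risk from behavioral signals. The sketch below fakes this with hand-picked weights and a logistic squashing function; a real model would learn the weights from historical churn labels, and the feature names are invented:

```python
import math

def churn_score(days_since_login, support_tickets, monthly_visits):
    """Toy churn-risk score in [0, 1]: weights are illustrative, not learned.
    A real model would fit them on labelled historical churn data."""
    z = 0.08 * days_since_login + 0.5 * support_tickets - 0.3 * monthly_visits
    return 1 / (1 + math.exp(-z))  # logistic squashing into a probability-like score

at_risk = churn_score(days_since_login=40, support_tickets=3, monthly_visits=1)
loyal   = churn_score(days_since_login=2,  support_tickets=0, monthly_visits=20)
print(round(at_risk, 2), round(loyal, 2))
```

Customers whose score crosses a chosen threshold would then be targeted with a retention campaign, as the tip suggests.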

    4. Content generation

    Content is still king. And you can capitalize on that with Natural Language Processing (NLP), the ability of a computer program to understand human language. NLP will keep developing in the near future and become more mainstream. As computers understand language better and better, simple content can increasingly be generated automatically. That content remains hugely important is clear from research by the Content Marketing Institute (CMI): content marketing delivers three times as many leads per dollar spent as paid search! Moreover, content marketing costs less while offering greater long-term benefits.

    5. Hyper-targeted advertising

    Customers have ever more access to information and, faced with an excess of choices, are becoming less loyal to a product or brand. The customer experience a company offers matters more and more, so ads too must feel like a personal offer. Research by Salesforce shows that 51 percent of consumers expect companies to anticipate their needs and actively make relevant suggestions by around 2020, in other words to deploy hyper-targeted advertising. So use AI for data-driven customer segmentation and make ads ever more relevant to each audience.

    6. Price optimization

    McKinsey estimates that about 30% of all pricing decisions companies make each year fail to arrive at the optimal price. To stay competitive, it is important to continuously strike a balance between what customers are willing to pay for a product or service and what the profit margins can bear. Large companies show that price optimization is often crucial to their success. Walmart reportedly changes its prices more than 50,000 times per month. By using AI for dynamic pricing, prices can be continuously updated based on changing factors, so you are no longer dependent on static data.
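Dynamic pricing logic, at its simplest, nudges a price toward demand and competition within business-approved guardrails. All numbers in this sketch (the demand ratio, the bounds, the 5% competitor margin) are illustrative assumptions:

```python
def dynamic_price(base_price, demand_ratio, competitor_price,
                  floor=0.8, ceiling=1.3):
    """Adjust a price to current demand, capped against the competition
    and clamped to business-approved bounds around the base price.
    demand_ratio = current demand / expected demand (illustrative input)."""
    price = base_price * demand_ratio
    price = min(price, competitor_price * 1.05)          # never drift far above rivals
    price = max(base_price * floor, min(price, base_price * ceiling))
    return round(price, 2)

print(dynamic_price(100, demand_ratio=1.5, competitor_price=110))  # 115.5
print(dynamic_price(100, demand_ratio=0.6, competitor_price=110))  # 80.0
```

A production system would re-run a rule like this (or a learned model) continuously as demand and competitor data change.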

    7. Score better leads

    Deploy predictive lead scoring to score better leads and focus all your efforts on those most likely to buy. An IDC survey shows that 83 percent of companies already use, or plan to use, predictive lead scoring for sales and marketing. And with the help of AI, major gains can be made here. Predictive lead scoring is specifically designed to determine which criteria characterize a good lead. It uses algorithms that can establish which attributes converted and non-converted leads have in common. With that knowledge, lead scoring software can build and test multiple predictive lead scoring models and then automatically select the model that best fits a set of sample data. And because lead scoring software also uses machine learning, lead scores become ever more accurate.
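The core idea, learning which attributes converted leads share and scoring new leads against them, can be sketched minimally. The attribute names and conversion history below are invented:

```python
from collections import Counter

def learn_weights(history):
    """Estimate, per attribute, the conversion rate among historical leads."""
    hits, totals = Counter(), Counter()
    for attrs, converted in history:
        for a in attrs:
            totals[a] += 1
            if converted:
                hits[a] += 1
    return {a: hits[a] / totals[a] for a in totals}

def score(attrs, weights):
    """Score a new lead as the average conversion rate of its known attributes."""
    known = [weights[a] for a in attrs if a in weights]
    return sum(known) / len(known) if known else 0.0

history = [
    ({"enterprise", "demo_requested"}, True),
    ({"enterprise", "newsletter"}, True),
    ({"smb", "newsletter"}, False),
    ({"smb"}, False),
]
w = learn_weights(history)
print(score({"enterprise", "demo_requested"}, w))  # 1.0
print(score({"smb", "newsletter"}, w))             # 0.25
```

Real lead scoring products fit and compare several model families on sample data; this per-attribute rate is the simplest possible stand-in.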

    8. Marketing attribution

    And finally: understand in detail where your best (and worst) conversions come from, so you can act on it. With conversion attribution you can measure exactly which website, search engine, ad, and so on brought a visitor to your website, and whether or not they placed an order there. With the help of machine learning you can build a smarter marketing attribution system that identifies exactly what influences individuals to show the desired behavior, in this case making a purchase. A good AI-driven marketing attribution system can therefore deliver more conversions.
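The difference between attribution models is easy to see in code. The sketch below compares simple last-touch and linear attribution over invented customer journeys; learned (data-driven) attribution generalizes the same credit-splitting idea:

```python
from collections import defaultdict

def attribute(journeys, model="linear"):
    """Split one unit of conversion credit across the channels in each journey."""
    credit = defaultdict(float)
    for channels in journeys:
        if model == "last_touch":
            credit[channels[-1]] += 1.0          # all credit to the final touchpoint
        else:
            for ch in channels:                  # linear: equal share to every touchpoint
                credit[ch] += 1.0 / len(channels)
    return dict(credit)

journeys = [["search", "email", "ad"], ["ad"], ["search", "ad"]]
print(attribute(journeys, "last_touch"))  # {'ad': 3.0}
print(attribute(journeys, "linear"))
```

Under last-touch the ad gets all the credit; the linear model reveals that search and email also contributed, which is exactly the insight attribution is after.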

    Author: Hylke Visser

    Source: Emerce

  • An overview of Morgan Stanley's surge toward data quality


    Jeff McMillan, chief analytics and data officer at Morgan Stanley, has long worried about the risks of relying solely on data. If the data put into an institution's system is inaccurate or out of date, it will give customers the wrong advice. At a firm like Morgan Stanley, that just isn't an option.

    As a result, Morgan Stanley has been overhauling its approach to data. Chief among its goals is improving data quality in core business processing.

    “The acceleration of data volume and the opportunity this data presents for efficiency and product innovation is expanding dramatically,” said Gerard Hester, head of the bank’s data center of excellence. “We want to be sure we are ahead of the game.”

    The data center of excellence was established in 2018. Hester describes it as a hub with spokes out to all parts of the organization, including equities, fixed income, research, banking, investment management, wealth management, legal, compliance, risk, finance and operations. Each division has its own data requirements.

    “Being able to pull all this data together across the firm we think will help Morgan Stanley’s franchise internally as well as the product we can offer to our clients,” Hester said.

    The firm hopes that improved data quality will let the bank build higher quality artificial intelligence and machine learning tools to deliver insights and guide business decisions. One product expected to benefit from this is the 'next best action' tool the bank developed for its financial advisers.

    This next best action uses machine learning and predictive analytics to analyze research reports and market data, identify investment possibilities, and match them to individual clients’ preferences. Financial advisers can choose to use the next best action’s suggestions or not.

    Another tool that could benefit from better data is an internal virtual assistant called 'ask research'. Ask research provides quick answers to routine questions like, “What’s Google’s earnings per share?” or “Send me your latest model for Google.” This technology is currently being tested in several departments, including wealth management.

    New data strategy

    Better data quality is just one of the goals of the revamp. Another is to have tighter control and oversight over where and how data is being used, and to ensure the right data is being used to deliver new products to clients.

    To make this happen, the bank recently created a new data strategy with three pillars. The first is working with each business area to understand their data issues and begin to address those issues.

    “We have made significant progress in the last nine months working with a number of our businesses, specifically our equities business,” Hester said.

    The second pillar is tools and innovation that improve data access and security. The third pillar is an identity framework.

    At the end of February, the bank hired Liezel McCord to oversee data policy within the new strategy. Until recently, McCord was an external consultant helping Morgan Stanley with its Brexit strategy. One of McCord’s responsibilities will be to improve data ownership, to hold data owners accountable when the data they create is wrong and to give them credit when it’s right.

    “It’s incredibly important that we have clear ownership of the data,” Hester said. “Imagine you’re joining lots of pieces of data. If the quality isn’t high for one of those sources of data, that could undermine the work you’re trying to do.”

    Data owners will be held accountable for the accuracy, security and quality of the data they contribute and make sure that any issues are addressed.

    Trend of data quality projects

    Arindam Choudhury, the banking and capital markets leader at Capgemini, said many banks are refocusing on data as it gets distributed in new applications.

    Some are driven by regulatory concerns, he said. For example, the Basel Committee on Banking Supervision's standard number 239 (principles for effective risk data aggregation and risk reporting) is pushing some institutions to make data management changes.

    “In the first go-round, people complied with it, but as point-to-point interfaces and applications, which was not very cost effective,” Choudhury said. “So now people are looking at moving to the cloud or a data lake, they’re looking at a more rationalized way and a more cost-effective way of implementing those principles.”

    Another trend pushing banks to get their data house in order is competition from fintechs.

    “One challenge that almost every financial services organization has today is they’re being disintermediated by a lot of the fintechs, so they’re looking at assets that can be used to either partner with these fintechs or protect or even grow their business,” Choudhury said. “So they’re taking a closer look at the data access they have. Organizations are starting to look at data as a strategic asset and try to find ways to monetize it.”

    A third driver is the desire for better analytics and reports.

    "There’s a strong trend toward centralizing and figuring out, where does this data come from, what is the provenance of this data, who touched it, what kinds of rules did we apply to it?” Choudhury said. That, he said, could lead to explainable, valid and trustworthy AI.

    Author: Penny Crosman

    Source: Information-management

  • Artificial intelligence: Can Watson save IBM?

    The history of artificial intelligence has been marked by seemingly revolutionary moments — breakthroughs that promised to bring what had until then been regarded as human-like capabilities to machines. The AI highlights reel includes the “expert systems” of the 1980s and Deep Blue, IBM’s world champion-defeating chess computer of the 1990s, as well as more recent feats like the Google system that taught itself what cats look like by watching YouTube videos.

    But turning these clever party tricks into practical systems has never been easy. Most were developed to showcase a new computing technique by tackling only a very narrow set of problems, says Oren Etzioni, head of the AI lab set up by Microsoft co-founder Paul Allen. Putting them to work on a broader set of issues presents a much deeper set of challenges.
    Few technologies have attracted the sort of claims that IBM has made for Watson, the computer system on which it has pinned its hopes for carrying AI into the general business world. Named after Thomas Watson Sr, the chief executive who built the modern IBM, the system first saw the light of day five years ago, when it beat two human champions on an American question-and-answer TV game show, Jeopardy!
    But turning Watson into a practical tool in business has not been straightforward. After setting out to use it to solve hard problems beyond the scope of other computers, IBM in 2014 adapted its approach.
    Rather than just selling Watson as a single system, its capabilities were broken down into different components: each of these can now be rented to solve a particular business problem, a set of 40 different products such as language-recognition services that amount to a less ambitious but more pragmatic application of an expanding set of technologies.
    Though it does not disclose the performance of Watson separately, IBM says the idea has caught fire. John Kelly, an IBM senior vice-president and head of research, says the system has become “the biggest, most important thing I’ve seen in my career” and is IBM’s fastest growing new business in terms of revenues.
    But critics say that what IBM now sells under the Watson name has little to do with the original Jeopardy!-playing computer, and that the brand is being used to create a halo effect for a set of technologies that are not as revolutionary as claimed.

    “Their approach is bound to backfire,” says Mr Etzioni. “A more responsible approach is to be upfront about what a system can and can’t do, rather than surround it with a cloud of hype.”
    Nothing that IBM has done in the past five years shows it has succeeded in using the core technology behind the original Watson demonstration to crack real-world problems, he says.

    Watson’s case
    The debate over Watson’s capabilities is more than just an academic exercise. With much of IBM’s traditional IT business shrinking as customers move to newer cloud technologies, Watson has come to play an outsized role in the company’s efforts to prove that it is still relevant in the modern business world. That has made it key to the survival of Ginni Rometty, the chief executive who, four years after taking over, is struggling to turn round the company.
    Watson’s renown is still closely tied to its success on Jeopardy! “It’s something everybody thought was ridiculously impossible,” says Kris Hammond, a computer science professor at Northwestern University. “What it’s doing is counter to what we think of as machines. It’s doing something that’s remarkably human.”

    By divining the meaning of cryptically worded questions and finding answers in its general knowledge database, Watson showed an ability to understand natural language, one of the hardest problems for a computer to crack. The demonstration seemed to point to a time when computers would “understand” complex information and converse with people about it, replicating and eventually surpassing most forms of human expertise.
    The biggest challenge for IBM has been to apply this ability to complex bodies of information beyond the narrow confines of the game show and come up with meaningful answers. For some customers, this has turned out to be much harder than expected.
    The University of Texas’s MD Anderson Cancer Center began trying to train the system three years ago to discern patients’ symptoms so that doctors could make better diagnoses and plan treatments.
    “It’s not where I thought it would go. We’re nowhere near the end,” says Lynda Chin, head of innovation at the University of Texas’ medical system. “This is very, very difficult.” Turning a word game-playing computer into an expert on oncology overnight is as unlikely as it sounds, she says.

    Part of the problem lies in digesting real-world information: reading and understanding reams of doctors’ notes that are hard for a computer to ingest and organise. But there is also a deeper epistemological problem. “On Jeopardy! there’s a right answer to the question,” says Ms Chin, but in the medical world there are often just well-informed opinions.
    Mr Kelly denies IBM underestimated how hard challenges like this would be and says a number of medical organisations are on the brink of bringing similar diagnostic systems online.

    Applying the technology
    IBM’s initial plan was to apply Watson to extremely hard problems, announcing in early press releases “moonshot” projects to “end cancer” and accelerate the development of Africa. Some of the promises evaporated almost as soon as the ink on the press releases had dried. For instance, a far-reaching partnership with Citibank to explore using Watson across a wide range of the bank’s activities quickly came to nothing.
    Since adapting in 2014, IBM now sells some services under the Watson brand. Available through APIs, or programming “hooks” that make them available as individual computing components, they include sentiment analysis — trawling information like a collection of tweets to assess mood — and personality tracking, which measures a person’s online output using 52 different characteristics to come up with a verdict.

    At the back of their minds, most customers still have some ambitious “moonshot” project they hope that the full power of Watson will one day be able to solve, says Mr Kelly; but they are motivated in the short term by making improvements to their business, which he says can still be significant.
    This more pragmatic formula, which puts off solving the really big problems to another day, is starting to pay dividends for IBM. Companies like Australian energy group Woodside are using Watson’s language capabilities as a form of advanced search engine to trawl their internal “knowledge bases”. After feeding more than 20,000 documents from 30 years of projects into the system, the company’s engineers can now use it to draw on past expertise, like calculating the maximum pressure that can be used in a particular pipeline.
    To critics in the AI world, the new, componentised Watson has little to do with the original breakthrough and waters down the technology. “It feels like they’re putting a lot of things under the Watson brand name — but it isn’t Watson,” says Mr Hammond.
    Mr Etzioni goes further, claiming that IBM has done nothing to show that its original Jeopardy!-playing breakthrough can yield results in the real world. “We have no evidence that IBM is able to take that narrow success and replicate it in broader settings,” he says. Of the box of tricks that is now sold under the Watson name, he adds: “I’m not aware of a single, super-exciting app.”

    To IBM, though, such complaints are beside the point. “Everything we brand Watson analytics is very high-end AI,” says Mr Kelly, involving “machine learning and high-speed unstructured data”. Five years after Jeopardy! the system has evolved far beyond its original set of tricks, adding capabilities such as image recognition to expand greatly the range of real-world information it can consume and process.

    Adopting the system
    This argument may not matter much if the Watson brand lives up to its promise. It could be self-fulfilling if a number of early customers adopt the technology and put in the work to train the system to work in their industries, something that would progressively extend its capabilities.

    Another challenge for early users of Watson has been knowing how much trust to put in the answers the system produces. Its probabilistic approach makes it very human-like, says Ms Chin at MD Anderson. Having been trained by experts, it tends to make the kind of judgments that a human would, with the biases that implies.
    In the business world, a brilliant machine that throws out an answer to a problem but cannot explain itself will be of little use, says Mr Hammond. “If you walk into a CEO’s office and say we need to shut down three factories and sack people, the first thing the CEO will say is: ‘Why?’” He adds: “Just producing a result isn’t enough.”
    IBM’s attempts to make the system more transparent, for instance by using a visualisation tool called WatsonPaths to give a sense of how it reached a conclusion, have not gone far enough, he adds.
    Mr Kelly says a full audit trail of Watson’s decision-making is embedded in the system, even if it takes a sophisticated user to understand it. “We can go back and figure out what data points Watson connected” to reach its answer, he says.

    He also contrasts IBM with other technology companies like Google and Facebook, which are using AI to enhance their own services or make their advertising systems more effective. IBM is alone in trying to make the technology more transparent to the business world, he argues: “We’re probably the only ones to open up the black box.”
    Even after the frustrations of wrestling with Watson, customers like MD Anderson still believe it is better to be in at the beginning of a new technology.
    “I am still convinced that the capability can be developed to what we thought,” says Ms Chin. Using the technology to put the reasoning capabilities of the world’s oncology experts into the hands of other doctors could be far-reaching: “The way Amazon did for retail and shopping, it will change what care delivery looks like.”
    Ms Chin adds that Watson will not be the only reasoning engine that is deployed in the transformation of healthcare information. Other technologies will be needed to complement it, she says.
    Five years after Watson’s game show gimmick, IBM has finally succeeded in stirring up hopes of an AI revolution in business. Now, it just has to live up to the promises.

    Source: Financial Times

  • Augmented analytics: when AI improves data analytics


    Augmented analytics, the combination of AI and analytics, is the latest innovation in data analytics. For organizations, data analysis has evolved from hiring “unicorn” data scientists to using smart applications that, thanks to AI, provide actionable insights for decision-making in just a few clicks. 

    Augmenting by definition means making something greater in strength or value. Augmented analytics, also known as AI-driven analytics, helps in identifying hidden patterns in large data sets and uncovers trends and actionable insights. It leverages technologies such as Analytics, Machine Learning, and Natural Language Generation to automate data management processes and assist with the hard parts of analytics. 

    According to Gartner, by the end of 2024, 75% of enterprises will operationalize AI, driving a 5x increase in streaming data and analytics infrastructures. The capabilities of AI are poised to augment analytics activities and enable companies to internalize data-driven decision-making while enabling everyone in the organization to easily deal with data. This means AI helps in democratizing data across the enterprise and saves data analysts, data scientists, engineers, and other data professionals from spending time on repetitive manual processes.

    How does AI improve analytics?

    The latest advances in Artificial Intelligence play a significant role in making business processes more efficient and powerful with the help of automation. Analytics, too, is becoming more accessible and automated because of AI. Here are a few ways in which AI is contributing to analytics:

    • With the help of machine learning algorithms, AI systems can automatically analyze data and uncover hidden trends, patterns, and insights that can be used by employees to make better-informed decisions. 
    • AI automates report generation and makes data easy-to-understand by using Natural Language Generation.
    • Using Natural Language Query (NLQ), AI enables everyone in the organization to intuitively find answers and extract insights from data, thereby improving data literacy and freeing time for data scientists.
    • AI helps in streamlining BI by automating data analytics and delivering insights and value faster.

    So, how does it work?

    While traditional BI used rule-based programs to deliver static analytics reports from data, augmented analytics leverages AI techniques such as Machine Learning and Natural Language Generation to automate data analysis and visualization. 

    • Machine Learning learns from data and identifies trends, patterns, and relationships between data points. It can use past instances and experiences to adapt to changes and improvise on the data. 
    • Natural Language Generation uses language to convert the findings from machine learning data into easy-to-decipher insights. Machine Learning derives all the insights, and NLG converts those insights into a human-readable format.

    Augmented analytics can also take in queries from users and generate answers in the form of visuals and text. This entire process of generating insights from data is automated, making it easy for non-technical users to interpret data and identify insights.
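The machine-learning-plus-NLG pipeline described above can be caricatured in a few lines: compute a statistic, then render it as a sentence from a template. Real augmented-analytics products use far richer language generation; the metric name and figures here are illustrative:

```python
from statistics import mean

def narrate(metric, values):
    """Turn a computed trend into a plain-language sentence (template-based NLG).
    Compares the average of the later half of the series with the earlier half."""
    mid = len(values) // 2
    first, second = mean(values[:mid]), mean(values[mid:])
    change = (second - first) / first * 100
    direction = "up" if change > 0 else "down"
    relation = "above" if change > 0 else "below"
    return (f"{metric} is trending {direction}: the recent average is "
            f"{abs(change):.0f}% {relation} the earlier period.")

print(narrate("Weekly revenue", [100, 102, 98, 118, 122, 120]))
```

The analysis step (here, a trivial split-half comparison) stands in for machine learning; the sentence template stands in for Natural Language Generation.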

    Augmented analytics for enterprises

    Business Intelligence can help in making improved business decisions and driving better ROI by gathering and processing data. A good BI tool collects important data from internal and external sources and provides actionable insights out of it. Augmented analytics simply improves business intelligence and helps enterprises in the following ways:

    1. Accelerates data preparation

    Data analysts usually spend most of their time extracting and cleaning data. Augmented analytics takes away these painstaking processes by automating the ETL (extract, transform and load) pipeline and providing valuable data that is ready for analysis.

    2. Automates insight generation

    Once the data is prepared and ready for processing, augmented analytics uses it to automatically derive insights. It uses machine learning algorithms to automate analyses and quickly generate insights that would take days or months if produced by data scientists and analysts.

    3. Allows querying of data

    Augmented analytics makes it easy for users to ask questions of and interact with their data. With the help of NLQ and NLG, it takes in queries in natural language, translates them into machine language, and then produces meaningful results and insights in easy-to-understand language. This makes data analytics a two-way conversation in which businesses can ask questions of their data and get answers in real time.
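As a toy illustration of the NLQ idea, a few hand-written rules can map a question onto a filtered aggregation. Real NLQ engines parse far richer language; the data, field names, and rules below are all invented:

```python
# Deliberately simple NLQ sketch: rule-based "translation" of a
# natural-language question into filters over a tiny dataset.

SALES = [
    {"product": "laptop", "region": "EMEA", "revenue": 1200},
    {"product": "laptop", "region": "APAC", "revenue": 900},
    {"product": "phone", "region": "EMEA", "revenue": 700},
]

def answer(question):
    q = question.lower()
    rows = SALES
    # Apply a filter for each dimension the question mentions
    for field, values in (("product", {"laptop", "phone"}),
                          ("region", {"emea", "apac"})):
        mentioned = [v for v in values if v in q]
        if mentioned:
            rows = [r for r in rows if r[field].lower() in mentioned]
    total = sum(r["revenue"] for r in rows)
    return f"Total revenue: {total}"

print(answer("What is the total revenue for laptops in EMEA?"))  # Total revenue: 1200
```

The "two-way conversation" amounts to this loop at scale: parse the question, constrain the data, and narrate the aggregate back to the user.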

    4. Empowers everyone to use analytics products

    The feature of querying data makes it possible for professionals to delve deeper into their data and also enables everyone in the organization to use analytics products. Enterprises no longer require data scientists or professionals with technical expertise to use BI tools to analyze data. This has led to an increase in the user base of BI and analytics tools.

    5. Automates report generation and dissemination

    With augmented analytics, insights can be generated from data at the speed of thought. These insights can further be used to automate report writing, saving a lot of manual effort in report generation.

    Augmented analytics in action

    Augmented analytics can be used to solve various business problems. Use cases and applications include demand forecasting, fraud and anomaly detection, customer and market insights, performance tracking, and so on. Here are a few examples:

    • Banking and financial institutions use augmented analytics to generate personalized portfolio analysis reports.
    • Retail and FMCG companies use intelligence powered by augmented analytics to track market insights and make informed decisions.
    • Companies in the financial services sector use recommendations and insights mined by augmented analytics to detect and prevent fraud or anomalies.
    • Media and entertainment companies use insights generated from augmented analytics to provide tailored content to their users.
    • Marketing and sales functions across businesses use augmented analytics to extract data from external and internal sources and gain insights into sales, customer trends, and product performance.

    Wrapping up

    The complexity and scale of data being produced and used by businesses across sectors are more than humans alone can handle. Enterprises have started adopting the new AI wave in analytics to tackle data and improve their processes. Augmented analytics is the disruptor, and leveraging it with BI platforms can help businesses to analyze data faster, optimize their operations and make data teams more productive.

    Author: Neerav Parekh

    Source: Dataconomy

  • BERT-SQuAD: Interviewing AI about AI

    BERT-SQuAD: Interviewing AI about AI

    If you’re looking for a data science job, you’ve probably noticed that the field is hyper-competitive. AI can now even generate code in any language. Below, we’ll explore how AI can extract information from paragraphs to answer questions.

    One day you might be competing against AI, if AutoML isn’t that competitor already.

    What is BERT-SQuAD?

    BERT-SQuAD combines Google's BERT with the Stanford Question Answering Dataset (SQuAD).

    BERT is a cutting-edge Natural Language Processing algorithm that can be used for tasks like question answering (which we’ll go into here), sentiment analysis, spam filtering, document clustering, and more. It’s all language!

    “Bidirectionality” refers to the fact that a word's meaning changes with its context, as in “let’s hit the club” versus “an idea hit him”, so the model considers the words on both sides of a keyword.

    “Encoding” just means assigning numbers to characters, or turning an input like “let’s hit the club” into a machine-workable format.

    “Representations” are the general understanding of words you get by looking at many of their encodings in a corpus of text.

    “Transformers” are what you use to get from encodings to representations. This is the most complex part.
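To make the encoding step concrete, here is a deliberately simple whitespace tokenizer that assigns integer ids. BERT itself uses WordPiece subword tokenization, so this only sketches the idea:

```python
# Toy "encoding": turn text into numbers a model can work with.
# (BERT actually uses WordPiece subword tokenization, not whole words.)

def build_vocab(corpus):
    vocab = {}
    for sentence in corpus:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))  # assign ids in order seen
    return vocab

def encode(sentence, vocab):
    return [vocab[t] for t in sentence.lower().split()]

corpus = ["let's hit the club", "an idea hit him"]
vocab = build_vocab(corpus)
print(encode("let's hit the club", vocab))  # [0, 1, 2, 3]
```

Notice that "hit" gets the same id in both sentences; it is the transformer's job, using bidirectional context, to give those two occurrences different representations.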

    As mentioned, BERT can be trained to work on basically any kind of language task, so SQuAD refers to the dataset we’re using to train it on a specific language task: Question answering.

    SQuAD is a reading comprehension dataset, containing questions asked by crowdworkers on Wikipedia articles, where the answer to every question is a segment of text from the corresponding passage.

    BERT-SQuAD, then, allows us to answer general questions by fishing out the answer from a body of text. It’s not cooking up answers from scratch, but rather, it understands the context of the text enough to find the specific area of an answer.

    For example, here’s a context paragraph about lasso and ridge regression:

    “You can quote ISLR’s authors Hastie, Tibshirani who asserted that, in presence of few variables with medium / large sized effect, use lasso regression. In presence of many variables with small / medium sized effect, use ridge regression.

    Conceptually, we can say, lasso regression (L1) does both variable selection and parameter shrinkage, whereas Ridge regression only does parameter shrinkage and end up including all the coefficients in the model. In presence of correlated variables, ridge regression might be the preferred choice. Also, ridge regression works best in situations where the least square estimates have higher variance. Therefore, it depends on our model objective.”

    Now, we could ask BERT-SQuAD:

    “When is Ridge regression favorable over Lasso regression?”

    And it’ll answer:

    “In presence of correlated variables”

    While I show around 100 words of context here, you could input far more context into BERT-SQuAD, like whole documents, and quickly retrieve answers. An intelligent Ctrl-F, if you will.
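To appreciate what BERT-SQuAD adds, consider a naive keyword-overlap baseline for the same "intelligent Ctrl-F" idea. BERT-SQuAD predicts an answer span from learned representations; this stand-in merely scores sentences by word overlap with the question:

```python
import re

# Naive extractive-QA baseline: return the context sentence sharing
# the most words with the question. Not BERT -- just a rough stand-in.

def naive_qa(question, context):
    q_words = set(re.findall(r"\w+", question.lower()))
    sentences = re.split(r"(?<=[.?])\s+", context)
    return max(sentences,
               key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))))

context = ("In presence of correlated variables, ridge regression might be "
           "the preferred choice. Lasso regression does both variable "
           "selection and parameter shrinkage.")
print(naive_qa("When is ridge regression preferred?", context))
```

This baseline can only return whole sentences and is easily fooled by paraphrase; BERT-SQuAD's contextual representations are what let it pin down the exact span "In presence of correlated variables".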

    To test the following 7 questions, I used Gradio, a library that lets developers build interfaces for models. In this case, I used the BERT-SQuAD interface created in a Google Colab notebook.

    I used the contexts from a Kaggle thread as inputs, and modified the questions for simplicity's sake.

    Q1: What will happen if you don’t rotate PCA components?

    The effect of PCA will diminish

    Q2. How do you reduce the dimensions of data to reduce computation time?

    We can separate the numerical and categorical variables and remove the correlated variables

    Q3: Why is Naive Bayes “naive” ?

    It assumes that all of the features in a data set are equally important and independent

    Q4: Which algorithm should you use to tackle low bias and high variance?


    Q5: How are kNN and kmeans clustering different?

    kmeans is unsupervised in nature and kNN is supervised in nature

    Q6: When is Ridge regression favorable over Lasso regression?

    In presence of correlated variables

    Q7: What is convex hull?

    Represents the outer boundaries of the two groups of data points

    Author: Frederik Bussler

    Source: Towards Data Science


  • Big Data Predictions for 2016

    A roundup of big data and analytics predictions and pontifications from several industry prognosticators.

    At the end of each year, PR folks from different companies in the analytics industry send me predictions from their executives on what the next year holds. This year, I received a total of 60 predictions from a record 17 companies. I can't laundry-list them all, but I can and did put them in a spreadsheet (irony acknowledged) to determine the broad categories many of them fall in. And the bigger of those categories provide a nice structure to discuss many of the predictions in the batch.

    Predictions streaming in
    MapR CEO John Schroeder, whose company just added its own MapR Streams component to its Hadoop distribution, says "Converged Approaches [will] Become Mainstream" in 2016. By "converged," Schroeder is alluding to the simultaneous use of operational and analytical technologies. He explains that "this convergence speeds the 'data to action' cycle for organizations and removes the time lag between analytics and business impact."

    The so-called "Lambda Architecture" focuses on this same combination of transactional and analytical processing, though MapR would likely point out that a "converged" architecture co-locates the technologies and avoids Lambda's approach of tying the separate technologies together.

    Whether integrated or converged, Phu Hoang, the CEO of DataTorrent predicts 2016 will bring an ROI focus to streaming technologies, which he summarizes as "greater enterprise adoption of streaming analytics with quantified results." Hoang explains that "while lots of companies have already accepted that real-time streaming is valuable, we'll see users looking to take it one step further to quantify their streaming use cases."

    Which industries will take charge here? Hoang says "FinTech, AdTech and Telco lead the way in streaming analytics." That makes sense, but I think heavy industry is, and will be, in a leadership position here as well.

    In fact, some in the industry believe that just about everyone will formulate a streaming data strategy next year. One of them is Anand Venugopal of Impetus Technologies, whom I spoke with earlier this month. Venugopal, in fact, feels that we are within two years of streaming data being looked upon as just another data source.

    Internet of predicted things
    It probably won't shock you that the Internet of Things (IoT) was a big theme in this year's round of predictions. Quentin Gallivan, Pentaho's CEO, frames the thoughts nicely with this observation: "Internet of Things is getting real!" Adam Wray, CEO at Basho, quips that "organizations will be seeking database solutions that are optimized for the different types of IoT data." That might sound a bit self-serving, but Wray justifies this by reasoning that this will be driven by the need to "make managing the mix of data types less operationally complex." That sounds fair to me.

    Snehal Antani, CTO at Splunk, predicts that "Industrial IoT will fundamentally disrupt the asset intelligence industry." Suresh Vasudevan, the CEO of Nimble Storage proclaims "in 2016 the IoT invades the datacenter." That may be, but IoT technologies are far from standardized, and that's a barrier to entry for the datacenter. Maybe that's why the folks at DataArt say "the IoT industry will [see] a year of competition, as platforms strive for supremacy." Maybe the data center invasion will come in 2017, then.

    Otto Berkes, CTO at CA Technologies, asserts that "Bitcoin-born Blockchain shows it can be the storage of choice for sensors and IoT." I hardly fancy myself an expert on blockchain technology, so I asked CA for a little more explanation around this one. A gracious reply came back, explaining that "IoT devices using this approach can transact directly and securely with each other...such a peer-to-peer configuration can eliminate potential bottlenecks and vulnerabilities." That helped a bit, and it incidentally shines a light on just how early-stage IoT technology still is, with respect to security and distributed processing efficiencies.

    Growing up
    Though admittedly broad, the category with the most predictions centered on the theme of value and maturity in Big Data products supplanting the fascination with new features and products. Essentially, value and maturity are proxies for the enterprise-readiness of Big Data platforms.

    Pentaho's Gallivan says that "the cool stuff is getting ready for prime time." MapR's Schroeder predicts "Shiny Object Syndrome Gives Way to Increased Focus on Fundamental Value," and qualifies that by saying "...companies will increasingly recognize the attraction of software that results in business impact, rather than focusing on raw big data technologies." In a related item, Schroeder predicts "Markets Experience a Flight to Quality," further stating that "...investors and organizations will turn away from volatile companies that have frequently pivoted in their business models."

    Sean Ma, Trifacta's Director of Product Management, looking at the manageability and tooling side of maturity, predicts that "Increasing the amount of deployments will force vendors to focus their efforts on building and marketing management tools." He adds: "Much of the capabilities in these tools...will need to replicate functionality in analogous tools from the enterprise data warehouse space, specifically in the metadata management and workflow orchestration." That's a pretty bold prediction, and Ma's confidence in it may indicate that Trifacta has something planned in this space. But even if not, he's absolutely right that this functionality is needed in the Big Data world. In terms of manageability, Big Data tooling needs to achieve not just parity with data warehousing and BI tools, but needs to surpass that level.

    The folks at Signals say "Technology is Rising to the Occasion" and explain that "advances in artificial intelligence and an understanding [of] how people work with data is easing the collaboration between humans and machines necessary to find meaning in big data." I'm not sure if that is a prediction, or just wishful thinking, but it certainly is the way things ought to be. For all the advances we've made in analyzing data using machine learning and intelligence, we've still left sifting through the output a largely manual process.

    Finally, Mike Maciag, the COO at AltiScale, asserts this forward-looking headline: "Industry standards for Hadoop solidify." Maciag backs up his assertion by pointing to the Open Data Platform initiative (ODPi) and its work to standardize Hadoop distributions across vendors. ODPi was originally anchored by Hortonworks, with numerous other companies, including AltiScale, IBM and Pivotal, jumping on board. The organization is now managed under the auspices of the Linux Foundation.

    Artificial flavor
    Artificial Intelligence (AI) and Machine Learning (ML) figured prominently in this year's predictions as well. Splunk's Antani reasons that "Machine learning will drastically reduce the time spent analyzing and escalating events among organizations." But Lukas Biewald, Founder and CEO of Crowdflower insists that "machines will automate parts of jobs -- not entire jobs." These two predictions are not actually contradictory. I offer both of them, though, to point out that AI can be a tool without being a threat.

    Be that as it may, Biewald also asserts that "AI will significantly change the business models of companies today." He expands on this by saying "legacy companies that aren't very profitable and possess large data sets may become more valuable and attractive acquisition targets than ever." In other words, if companies found gold in their patent portfolios previously, they may find more in their data sets, as other companies acquire them to further their efforts in AI, ML and predictive modeling.

    And more
    These four categories were the biggest among all the predictions but not the only ones, to be sure. Predictions around cloud, self-service, flash storage and the increasing prominence of the Chief Data Officer were in the mix as well. A number of predictions that stood on their own were there too, speaking to issues as far-reaching as salaries for Hadoop admins to open source, open data and container technology.

    What's clear from almost all the predictions, though, is that the market is starting to take basic big data technology as a given, and is looking towards next-generation integration, functionality, intelligence, manageability and stability. This implies that customers will demand certain baseline data and analytics functionality to be part of most technology solutions going forward. And that's a great sign for everyone involved in Big Data.

    Source: ZDNet


  • Bol.com: using machine learning to better match supply and demand

    An online marketplace is a concept that e-commerce continues to adopt at an increasing rate. Besides consumer-to-consumer marketplaces such as Marktplaats.nl, there are of course also business-to-consumer marketplaces, in which an online platform brings together the demand of consumers and the supply of vendors.

    Some marketplaces have no assortment of their own: their supply comes entirely from affiliated vendors; think of Alibaba, for example. At Amazon, the share of own products is 50 percent. Bol.com also has its own marketplace, 'Verkopen via bol.com' ('Selling via bol.com'), which adds millions of extra items to bol.com's assortment.

    Safeguarding content quality

    There is a lot involved in managing such a marketplace. The goal is clear: make supply and demand meet as quickly as possible, so that the customer is immediately offered a number of products that are relevant to him. And with millions of customers on one side and millions of products from thousands of vendors on the other, that is quite a job.

    Jens explains: "It starts with the standardization of information on both the demand and the supply side. For example, if you as a vendor want to offer a Tchaikovsky CD or Dolce & Gabbana glasses on bol.com, there are many possible spellings. For a sales platform like 'Verkopen via bol.com', the quality of the data is crucial. Maintaining the quality of the content is therefore one of the challenges.

    On the other side of the transaction there are, of course, bol.com customers, who likewise type all kinds of variations of terms, such as brand names, into the search field. In addition, people increasingly search on generic terms such as 'wedding gift' or 'party supplies'.

    Bringing supply and demand together

    As the assortment keeps growing, which it does, and customers search in ever more generic terms, making a match and keeping relevance high becomes ever more challenging. Given the volume of this unstructured data and the fact that it has to be analyzed in real time, you cannot make that match by hand. You have to put the data to work intelligently. That is one of the activities of bol.com's customer intelligence team, part of the customer centric selling department.

    Jens: "The trick is to translate customer behavior on the website into content improvements. By analyzing the words (and word combinations) customers use to search for items and matching them against the products that are eventually bought, synonyms for those products can be created. Thanks to these synonyms, the relevance of the search results goes up, and you thus help the customer find the product faster. Moreover, it cuts both ways, because the quality of the product catalog is improved at the same time. Think of refining the various color descriptions (WIT, Wit, witte, white, etc.)."
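The behavior-to-synonym idea can be sketched as follows. This is a hypothetical toy: the search terms, product ids, and threshold are invented, and bol.com's actual pipeline is not described in this detail.

```python
from collections import Counter

# Hypothetical sketch of mining synonyms from behavior: count which search
# terms lead to purchases of the same product, then treat terms that
# converge on one product as candidate synonyms for it.

search_to_purchase = [
    ("tsjaikovski", "cd-tchaikovsky-1812"),
    ("tchaikovsky", "cd-tchaikovsky-1812"),
    ("tsjaikovski", "cd-tchaikovsky-1812"),
    ("dolce gabbana", "glasses-dg-001"),
]

by_product = {}
for term, product in search_to_purchase:
    by_product.setdefault(product, Counter())[term] += 1

# Products reached via more than one distinct term yield synonym candidates
synonyms = {p: sorted(c) for p, c in by_product.items() if len(c) > 1}
print(synonyms)  # {'cd-tchaikovsky-1812': ['tchaikovsky', 'tsjaikovski']}
```

In a production setting, such candidates would be scored by confidence and, as the article describes, put before human specialists for validation before entering the search index.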

    Algorithms keep getting smarter

    The process above still runs semi-automatically (and retroactively), but the ambition is to have it run fully automatically in the future. To get there, machine learning techniques are being implemented step by step. The first investment was in technologies to process large volumes of unstructured data at high speed. Bol.com owns two data centers of its own, housing dozens of clusters.

    "We are now experimenting extensively with using these clusters to improve the search algorithm, enrich the content, and standardize," Jens notes. "And that brings challenges. After all, if you overdo the standardization, you end up in a self-fulfilling prophecy. Fortunately, the algorithms are taking over bit by bit and keep getting smarter. The algorithm now tries to link a search term to a product by itself and submits the result to various internal specialists. Concretely, the specialists are shown that 'there is a 75 percent chance that this is what the customer means'. That link is then validated manually. The specialists' feedback on a proposed improvement provides important input for the algorithms to process information even better. You can see the algorithms doing their job better and better."

    Still, this presents Jens and his team with a next question: where do you draw the line below which the algorithm may decide by itself? Is that at 75 percent? Or should everything below 95 percent be validated by human judgment?

    Building a better store for customers with big data

    Three years ago, big data was a topic discussed mainly in PowerPoint slides. Today, many (larger) e-commerce companies have a Hadoop cluster of their own. The next step is to use big data to make the store genuinely better for customers, and bol.com is working hard on that. In 2010 the company switched from 'mass media' to 'personally relevant' campaigns, increasingly attempting to offer the customer a personal message, in real time, based on various 'triggers'.

    Those triggers (such as pages visited or products viewed) increasingly outweigh historical data (who the customer is and what he has bought in the past).

    "If you gain insight into the relevant triggers and leave out the irrelevant ones," Jens says, "you can serve the consumer better, for example by showing the most relevant review, making an offer, or compiling a selection of comparable products. This way you align better with the customer journey, and the chance keeps growing that the customer finds what he is looking for."

    Bol.com does this by first looking for the relevant triggers, based on behavior on the website as well as the customer's known preferences. After these have been linked to the content, bol.com runs A/B tests to analyze conversion and decide whether or not to roll the change out permanently. After all, every change must result in higher relevance.
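As a sketch of how such an A/B test on conversion might be judged, here is a standard two-proportion z-test. The figures are invented, and the article does not describe bol.com's actual test methodology:

```python
import math

# Two-proportion z-test: did variant B (with the content change)
# convert significantly better than control A? Figures are made up.

def z_score(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = z_score(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}")  # z = 2.55; |z| > 1.96 means significant at the 5% level
```

Only changes that clear a significance bar like this would be rolled out permanently, which matches the article's rule that every change must result in higher relevance.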

    Analyzing unstructured data naturally involves a range of techniques, and it requires both smart algorithms and human insight. Jens: "Fortunately, not only our algorithms are self-learning, but the company is too, so the process keeps getting faster and better."

    Outsourcing or doing everything in-house is a strategic decision. Bol.com chose the latter. Naturally, outside expertise is still brought in on an ad-hoc basis when it helps speed up processes. Data analysts and data scientists are an important part of the growing customer centric selling team.

    The difference speaks for itself: data analysts are trained in 'traditional' tools such as SPSS and SQL and do the analysis work. Data scientists have greater conceptual flexibility and can also program in Java, Python, and Hive, among other languages. There are of course growth opportunities for ambitious data analysts, but it is becoming ever harder to find data scientists.

    Although the market is working hard to expand the supply, for now this remains a small, select group of professionals. Bol.com does everything it can to recruit and train the right people. First, an employee with the right profile is brought in; think of someone who has just graduated in artificial intelligence, applied physics, or another exact science. This brand-new data scientist is then taken under the wing of one of the experienced experts from bol.com's training team. Training in programming languages is an important part of this; beyond that, it is mostly learning by doing.

    Man versus machine

    As the algorithms keep getting smarter and artificial intelligence technologies ever more advanced, you might think the shortage of data scientists is temporary: the computers will take over.

    According to Jens, that is not the case: "You will always need human insight. It is just that, because the machines take over ever more of the routine, standardized analysis work, you can do ever more. For example, not processing the top 10,000 search terms, but all of them. You can effectively go much deeper and much broader, so the impact of your work on the organization is many times greater. The result? The customer is helped better and saves time, because he gets ever more relevant information and is therefore more engaged. And it takes us ever further in our ambition to offer our customers the best store there is."

    Click here for the full report.

    Source: Marketingfacts

  • Chinese hospitals turn to AI assistance for treatment corona virus

    Chinese hospitals turn to AI assistance for treatment corona virus

    The deadly coronavirus outbreak, which has pushed the Chinese health industry into overdrive, has also prompted the country’s hospitals to more quickly adopt robots as medical assistants.

    Telepresence bots that allow remote video communication, patient health monitoring and safe delivery of medical goods are growing in number on hospital floors in urban China. They’re now acting as a safe go-between that helps curb the spread of the coronavirus.

    Keenon Robotics Co., a Shanghai-based company, deployed 16 robots of a model nicknamed 'little peanut' to a hospital in Hangzhou after a group of Wuhan travelers to Singapore were held in quarantine. Siasun Robot and Automation Co. donated seven medical robots and 14 catering service robots to the Shenyang Red Cross to help hospitals combat the virus on Wednesday, according to a media release on the company’s website.

    Keenon and Siasun didn’t reply immediately to requests for comment. JD.com Inc. is testing the use of autonomous delivery robots in Wuhan, the company said in a statement. Local media has also reported robots being used in hospitals in the city as well as in Guangzhou, Jiangxi, Chengdu, Beijing, Shanghai, and Tianjin.

    The rapid spread of the coronavirus has left provincial hospitals straining to cope and helped accelerate the embrace of artificial intelligence as one solution, turning the gadgets into medical assistants. These bots join China’s tech-heavy response to the coronavirus outbreak, which also includes airborne drones and work-from-home apps. The jury remains out on how effective these coping tactics will be.

    China’s rapid buildout of fifth-generation wireless networking in areas around urban hospitals has also seen a rise in 5G-powered medical robots, equipped with cameras that allow remote video communication and patient monitoring. These are in contrast to robots like little peanut, whose primary function is to make indoor deliveries.

    'The technology of robots used in Chinese hospitals isn’t high, but what this virus is also highlighting, and it could be the next stage of Chinese robots, is the use of medical robot deployment', said Bloomberg Intelligence analyst Nikkie Lu.

    China Mobile Ltd. donated one 5G robot each to both Wuhan Union Hospital and Tongji Tianyou Hospital this week, according to a report by ThePaper.cn. Riding the 5G network, these assistant bots carry a disinfectant tank on board and will be used to safely clean hospital areas along a predetermined route, reducing the risk to medical personnel.

    Zhejiang People’s Hospital used a 5G robot to diagnose its first coronavirus patient on Sunday, according to a report by the Hangzhou news center run by the State Council Information Office. Beijing Jishuitan Hospital performed remote surgery on a patient in Shandong province via China Telecom Corp.’s 5G network last June.

    While it may take patients a moment or two to get over the shock of being helped by a robot rather than a medical professional, bots have already permeated a growing number of sectors in Chinese society including nursing homes, restaurants, warehouses, banks and over 200 kindergartens.

    Financial services company Huachuang Securities Co. believes even more robots are in China’s immediate future. Pointing to National Bureau of Statistics data suggesting that domestic production of industrial robots increased by 15.3% in the month of December, they predict similarly fast growth in the current quarter, according to a report published by Finance Sina.

    The increased quantity of robots deployed to combat the coronavirus has helped accelerate China’s path to the goal it had already set for itself. The country wants to become one of the world’s top 10 most intensively automated nations by the end of this year.

    Source: Information-management

  • Connection between human and artificial intelligence moving closer to realization

    Connection between human and artificial intelligence moving closer to realization

    What was once the stuff of science fiction is now science fact, and that’s a good thing. It is heartening to hear how personal augmentation with robotics is changing people’s lives.

    A paralyzed man in France is now using a brain-controlled robotic suit to walk. The connection between brain and machine is now possible through ultra-high-speed computing power combined with deep engineering to enable a highly connected device.

    Artificial Intelligence is in everyday life

    We are seeing the rise of artificial intelligence in every walk of life, moving beyond the black box and being part of human life in its everyday settings.

    Another example is the advent of the digital umpire in major league baseball. The angry disputes of players and fans challenging the umpire and holding of breaths for the replay may become a thing of the past with deep precision of instant decisions from an unbiased, non-impassioned, and non-human ump.

    Augmented reality and virtual reality are also becoming a must for business. They have moved into every aspect from medicine to mining across design, manufacturing, logistics and service and are a familiar business tool delivering multi-dimensional immediate insights that were previously hard to find.

    For example, people are using digital twin technology to see deeply into equipment, wherever it is, and diagnose and fix problems, or to take a global view of business operations through a digital board room.

    What’s changed? Fail fast, succeed sooner!

    Every technology takes time to find its groove as early adopters experiment and find mass uses for it. There is a usual cycle of experimentation; fast failure is a necessary part of discovering the best applications for a technology. We all saw Google Goggles fail to find market traction, but in its current generation it is an invaluable tool for providing information to people in the field, repairing equipment, and bringing needed expertise on site.

    Speed, Intelligence, and Connection make it happen

    The Six Million Dollar Man for business should be able to connect to the brain, providing instant feedback to the operations of the business based on actual experience in the field. It has to operate in the speed of a heartbeat and use predictive technologies. (Nerd Alert: Speaking of the Six Million Dollar Man, it should come as no surprise that the titular character has been upgraded to 'The Six Billion Dollar Man' in the upcoming movie starring Mark Wahlberg.)

    Think of all the things our brain does even as we walk: balancing our body while in motion and making adjustments as we turn our head or land our feet. To predict where our body will be so that the weight of our limbs can be adjusted, the brain needs instant feedback from all our senses to make decisions in real time that appear 'natural'.

    Business, too, needs systems that are deeply connected, predictive, and high speed, balancing the drive for change against the need to keep operations optimized. That requires a new architecture that is lightning fast because it works in memory rather than on disk, uses artificial intelligence to optimize decisions that come too fast to make on our own, keeps a pulse on the business, and predicts with machine learning.

    The fundamental architecture is different. It has to work together and be complete; it is no good having leg movements from one vendor and head movements from another. In a world where speed and sensing have to cover the whole body, everything needs to work in unison.

    We can’t wait to see how these new architectures will change the world.

    Author: David Sweetman

    Source: Dataversity

  • Data analytics: From studying the past to forecasting the future

    Data analytics: From studying the past to forecasting the future

    To compete in today's marketplace, it is critical that executives have access to an accurate and holistic view of their business. The key to sifting through massive amounts of data to gain this level of transparency is a robust analytics solution. As technology constantly evolves, so too do data analytics solutions.

    In this blog, we discuss three types of data analytics and the emerging role of artificial intelligence (AI) in processing the data:

    Descriptive analytics

    As the name suggests, descriptive analytics describes what happened in the past. This is accomplished by taking raw historical data, whether from five minutes or five years ago, and presenting an easy-to-understand, accurate view of past patterns or behaviors. By understanding what happened, we can better understand how it might influence the future. Many businesses use descriptive analytics to understand customer buying patterns, year-over-year sales, historical cost-to-serve, supply chain patterns, financials, and much more.

    Predictive analytics

    This is the ability to accurately forecast or predict what could happen moving forward. Understanding the likelihood of future outcomes enables a company to better prepare based on probabilities. This is accomplished by taking historical data from your various silos, such as CRM, ERP, and POS, and combining it into one single version of the truth. This enables users to identify trends in sales and to forecast demands on the supply chain, purchasing, and inventory levels based on a number of variables.

    Prescriptive Analytics

    This solution is the newest evolution in data analytics. It takes the previous iterations to the next level by revealing possible outcomes and prescribing courses of action, and it also shows why an outcome is likely to happen. Prescriptive analytics answers the question: what should we do? Although this is a relatively new form of analytics, larger retail companies are successfully using it to optimize customer experience, production, purchasing, and inventory in the supply chain to make sure the right products are delivered at the right time. In the stock market, prescriptive analytics can recommend when to buy or sell to optimize your profit.

    All three categories of analytics work together to provide the guidance and intelligence to optimize business performance.
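    The progression from descriptive to predictive to prescriptive can be made concrete with a toy sketch in plain Python. The sales figures, stock level, and hand-rolled linear trend below are invented for illustration, not a production approach: the script summarizes what happened (descriptive), extrapolates one month ahead (predictive), and turns the forecast into a recommended action (prescriptive).

```python
from statistics import mean

# Hypothetical monthly unit sales -- invented data for illustration.
sales = [120, 132, 141, 150, 158, 171]

# Descriptive: summarize what happened.
avg_sales = mean(sales)                 # average monthly sales
total_growth = sales[-1] - sales[0]     # growth over the period

# Predictive: fit a least-squares linear trend and extrapolate one month.
n = len(sales)
xs = range(n)
x_bar, y_bar = mean(xs), mean(sales)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales)) \
        / sum((x - x_bar) ** 2 for x in xs)
forecast = y_bar + slope * (n - x_bar)  # expected sales next month

# Prescriptive: turn the forecast into a recommended order.
stock_on_hand = 140
reorder_qty = max(0, round(forecast) - stock_on_hand)
print(round(forecast), reorder_qty)
```

    Real prescriptive systems replace the hand-rolled trend line with machine learning models and optimization over many variables, but the layering is the same: describe, then predict, then recommend.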

    Where AI fits in

    As technology continues to advance, AI will become a game-changer by making analytics substantially more powerful. A decade ago, analytics solutions only provided descriptive analytics. As the amount of data generated increased, solutions started to add predictive analytics. As AI evolves, data analytics solutions are becoming more sophisticated still, and BI software vendors are currently racing to be first to market with an AI offering that enhances prescriptive analytics.

    AI can help sales-based organizations by providing specific recommendations that sales representatives can act on immediately. Insight into customer buying patterns allows prescriptive analytics to suggest products to bundle, which ultimately increases order sizes while reducing delivery costs and the number of invoices.

    Predictive ordering enables companies to send you products you need before you order them. For example, some toothbrush or razor companies send replacement heads this way: they predict when the heads will begin to wear out and place the replacement order for you.
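    A minimal sketch of that prediction, assuming a hypothetical purchase history and a simple average-interval model (a real system would use far richer usage signals):

```python
from datetime import date, timedelta

# Hypothetical replacement-head orders for one customer -- invented dates.
orders = [date(2020, 1, 5), date(2020, 4, 2), date(2020, 7, 1), date(2020, 9, 28)]

# The average interval between orders approximates the wear-out cycle.
gaps = [(later - earlier).days for earlier, later in zip(orders, orders[1:])]
avg_gap = sum(gaps) / len(gaps)

# Ship the next replacement a few days before the predicted wear-out date.
lead_time = timedelta(days=5)
next_ship = orders[-1] + timedelta(days=round(avg_gap)) - lead_time
print(next_ship)
```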

    Improving data analytics for your business

    If you are considering enhancing your data analytics capability and adding artificial intelligence, we encourage you to seek out a software vendor that offers industry-matched data analytics that is easy and intuitive for everyone to use. This means dashboards, scorecards, and alerts pre-built around the standard KPIs for your industry.

    The next step is collaborating to customize the software to fit your business and augmenting it with newer predictive analytics and machine learning-based AI.

    Source: Phocas Software

  • DataRobot active in World Economic Forum AI initiative

    DataRobot active in World Economic Forum AI initiative

    For fairness, accountability, and transparency of Artificial Intelligence

    DataRobot, a fast-growing provider of enterprise AI, has joined a new World Economic Forum initiative: 'Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning'. With this initiative, the WEF aims to increase the societal impact of AI and machine learning while safeguarding equality, privacy, transparency, accountability, and social impact.
    In this initiative, the World Economic Forum brings together experts from the public and private sectors to develop and test policy frameworks that should accelerate the development of AI and machine learning and reduce their risks. The initiative works on projects such as child protection, a modern AI regulator, and policy around facial recognition technology. Within the initiative, DataRobot will work closely with researchers, organizations, and other stakeholders to create new insights into how AI can and should be used to improve society while safeguarding ethics and fairness.
    The collaboration with the World Economic Forum builds on DataRobot's years of work on trust, monitoring, and ethics. In 2019 it formed a Trusted AI team, led by Ted Kwartler, which focuses on building and delivering trustworthy and ethical AI systems and on guiding customers in this area. These customers include some of the world's largest banks, health insurers, and various government organizations.
    “As machine learning and AI continue to evolve and adoption grows, collaboration between organizations is essential to guarantee accountability, transparency, privacy, and impartiality,” said Kay Firth-Butterfield, Head of AI & Machine Learning and member of the Executive Committee of the World Economic Forum. “This initiative brings together experts who are not only looking at the positive impact AI can have on society at large, but who also want to safeguard trust in the organizations and individuals that use the technology.”
    “We are at a pivotal technological moment in history to drive change and shape a more equitable, AI-driven future that benefits everyone. As a leader in enterprise AI and machine learning, it is our responsibility to play an active role in ensuring that AI is used for the betterment of society,” says Ted Kwartler, VP Trusted AI at DataRobot. “We are excited to join forces with the World Economic Forum to mobilize the resources needed to make technology more sustainable and inclusive. We look forward to sharing our insights with the industry and working together to build a more ethical, transparent, and equitable AI ecosystem.”
    Source: DataRobot
  • DataRobot appoints Dan Wright as Chief Executive Officer

    DataRobot appoints Dan Wright as Chief Executive Officer

    Enterprise AI leader aims to further accelerate growth after a record year

    DataRobot, a leader in enterprise AI, has appointed Dan Wright as its new Chief Executive Officer to further accelerate the growth and adoption of its enterprise AI platform. Wright had been President and Chief Operating Officer at DataRobot since January 2020 and succeeds co-founder Jeremy Achin.
    “Over the past year we have built on the foundation our founder laid, increased our operational capacity, expanded our management team, and helped more customers than ever extract value from their data with AI,” said Wright. “We want to capitalize on that momentum and continue executing our ambitious growth plans. I am honored to have the opportunity to lead DataRobot into its next chapter.”
    Before joining DataRobot, Wright was COO at AppDynamics, a leader in Application Performance Management (APM). Under his leadership the company grew revenue 100x and established itself as the largest and fastest-growing APM vendor. AppDynamics was acquired by Cisco in 2017 for 3.7 billion USD, two days before its planned IPO, and continued its aggressive growth as part of Cisco.
    In December 2020, DataRobot announced it had raised 320 million USD in a pre-IPO funding round led by Altimeter Capital. The round, backed by new and existing investors including T. Rowe Price, funds and accounts managed by BlackRock, Tiger Global, Silver Lake Waterman, B Capital Group, Glynn Capital, ClearBridge, NEA, and Sapphire Ventures, values the company at more than 2.8 billion USD.
    DataRobot expects that a significant part of its targeted growth acceleration can be realized through tech-savvy early adopters of technology in countries such as the Netherlands, and therefore recently opened an office in Amsterdam. The Gemeente Den Haag (Municipality of The Hague) and the Nierstichting (Dutch Kidney Foundation) were the first in the Netherlands to sign up for DataRobot's platform.
    “More than a year ago I put a lot of energy into recruiting Dan for DataRobot, with the goal of eventually having him take over leadership of the company. Where needed, I will stay involved to support Dan and the rest of the DataRobot team in building an iconic company,” says Jeremy Achin, co-founder of DataRobot.
    “Since becoming involved in helping fight Covid-19, I have gained a better understanding of our country's problems and the threats to its security. Helping prevent future pandemics and contributing substantially to all aspects of national security has become my true passion,” Achin explains. “I want to follow my heart and spend more time on this. I want to thank all the great people who helped build DataRobot into what it is today - the company would be nothing without their blood, sweat, and tears.”

    About DataRobot

    DataRobot is the market leader in enterprise AI, delivering AI technology and supporting services worldwide that give organizations a competitive edge through smart insights. DataRobot's enterprise AI platform democratizes data science with end-to-end automation for building, deploying, and managing machine learning models. The platform creates value by delivering AI at scale and continuously optimizing performance. The proven combination of advanced software with AI implementation, training, and support services enables any organization, regardless of size or industry, to achieve better results with AI.
    Source: DataRobot
  • Dealing with data preparation: best practices - Part 1

    Dealing with data preparation: best practices - Part 1

    IBM is reporting that data quality challenges are a top reason why organizations are reassessing (or ending) artificial intelligence (AI) and business intelligence (BI) projects.

    Arvind Krishna, IBM’s senior vice president of cloud and cognitive software, stated in a recent interview with the Wall Street Journal that 'about 80% of the work with an AI project is collecting and preparing data. Some companies aren’t prepared for the cost and work associated with that going in. And you say: ‘Hey, wait a moment, where’s the AI? I’m not getting the benefit.’ And you kind of bail on it'.

    Many businesses are not prepared for the cost and effort associated with data preparation (DP) when starting AI and BI projects. To compound matters, hundreds of data and record types and billions of records are often involved in a project’s DP effort.

    However, data analytics projects are increasingly imperative to organizational success in the digital economy, hence the need for DP solutions.

    What is AI/BI data preparation?

    Gartner defines data preparation as 'an iterative and agile process for exploring, combining, cleaning, and transforming raw data into curated datasets for data integration, data science, data discovery, and analytics/business intelligence (BI) use cases'. 

    A 2019 International Data Corporation (IDC) study reports that data workers spend a remarkable amount of time each week on data-related activities: 33% on data preparation compared to 32% on analytics (and, sadly, just 13% on data science). The top challenge, cited by more than 30% of all data workers in the study, was that 'too much time is spent on data preparation'.

    The variety of data sources, the multiplicity of data types, the enormity of data volumes, and the numerous uses for data analytics and business intelligence, all result in multiple data sources and complexity for each project. Consequently, today’s data workers often use numerous tools for DP success.

    Capabilities needed in data preparation tools

    Evidence in the Gartner Research report Market Guide for Data Preparation Tools shows that the time spent on data preparation, and on reporting the information discovered during DP, can be cut by more than half when DP tools are implemented.

    In the same research report, Gartner lists details of vendors and DP tools. The analyst firm predicts that the market for DP solutions will reach $1 billion this year, with nearly a third (30%) of IT organizations employing some type of self-service data preparation tool set.

    Another Gartner Research Circle Survey on data and analytics trends revealed that over half (54%) of respondents want and need to automate their data preparation and cleansing tasks during the next 12 to 24 months.

    To accelerate data understandings and improve trust, data preparation tools should have certain key capabilities, including the ability to:

    • Extract and profile data. Typically, a data prep tool uses a visual environment that enables users to extract interactively, search, sample, and prepare data assets.
    • Create and manage data catalogs and metadata. Tools should be able to create and search metadata as well as track data sources, data transformations, and user activity against each data source. It should also keep track of data source attributes, data lineage, relationships, and APIs. All of this enables access to a metadata catalog for data auditing, analytics/BI, data science, and other operational use cases.
    • Support basic data quality and governance features. Tools must be able to integrate with other tools that support data governance/stewardship and data quality criteria.
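    As a concrete illustration of the first capability, the sketch below runs a minimal profiling pass over a small CSV extract in plain Python. The column names and rows are invented for the example; a real DP tool would do this interactively and at scale.

```python
import csv
import io
from collections import Counter

# A tiny invented extract -- stands in for a real data source.
raw = """customer_id,region,amount
1,EU,100.50
2,,250.00
3,US,
4,EU,75.25
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Per-column profile: missing values, cardinality, most frequent value.
profile = {}
for col in rows[0]:
    values = [row[col] for row in rows]
    present = [v for v in values if v]
    profile[col] = {
        "nulls": len(values) - len(present),
        "distinct": len(set(present)),
        "top": Counter(present).most_common(1),
    }
print(profile["region"])
```

    Even this toy profile surfaces the questions a DP tool answers at scale: which columns have gaps, how many distinct values each holds, and what dominates the distribution.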

    Keep an eye out for part 2 of this article, where we take a deeper dive into best practices for data preparation.

    Author: Wayne Yaddow

    Source: TDWI

  • Dealing with data preparation: best practices - Part 2

    Dealing with data preparation: best practices - Part 2

    If you haven't read yesterday's part 1 of this article, be sure to check it out before reading this article.

    Getting started with data preparation: best practices

    The challenge is getting good at DP. As a recent report by business intelligence pioneer Howard Dresner found, 64% of respondents constantly or frequently perform end-user DP, but only 12% reported being very effective at it. Nearly 40% of data professionals spend half of their time prepping data rather than analyzing it.

    Following are a few of the practices that help assure optimal DP for your AI and BI projects. Many more can be found from data preparation service and product suppliers.

    Best practice 1: Decide which data sources are needed to meet AI and BI requirements

    Take these three general steps to data discovery:

    1. Identify the data needed to meet required business tasks.
    2. Identify potential internal and external sources of that data (and include its owners).
    3. Assure that each source will be available according to required frequencies.

    Best practice 2: Identify tools for data analysis and preparation

    It will be necessary to load data sources into DP tools so the data can be analyzed and manipulated. It’s important to get the data into an environment where it can be closely examined and readied for the next steps.

    Best practice 3: Profile data for potential and selected source data

    This is a vital (but often discounted) step in DP. A project must analyze source data before it can be properly prepared for downstream consumption. Beyond simple visual examination, you need to profile data, detect outliers, and find null values (and other unwanted data) among sources.

    The primary purpose of this profiling analysis is to decide which data sources are even worth including in your project. As data warehouse guru Ralph Kimball writes in his book The Data Warehouse Toolkit, 'Early disqualification of a data source is a responsible step that can earn you respect from the rest of the team'.
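    The outlier-and-null check described above can be sketched in plain Python; the sample values and the standard 1.5 * IQR fence below are illustrative assumptions, not a recommendation for your data:

```python
from statistics import quantiles

# Invented source column with one missing value and one suspect reading.
raw_values = [102.0, 98.5, None, 101.2, 99.9, 97.8, 100.4, 5000.0]

# Null screening: record the positions of missing values.
nulls = [i for i, v in enumerate(raw_values) if v is None]
values = [v for v in raw_values if v is not None]

# Outlier screening: flag values outside the 1.5 * IQR fences.
q1, _, q3 = quantiles(values, n=4)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [v for v in values if not (low <= v <= high)]
print(nulls, outliers)
```

    The quartile-based fence is used here rather than a mean-based rule because an extreme value inflates the mean and standard deviation, masking the very outlier you are trying to find.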

    Best practice 4: Cleansing and screening source data

    Based on your knowledge of the end business analytics goal, experiment with different data cleansing strategies that will get the relevant data into a usable format. Start with a small, statistically-valid sample to iteratively experiment with different data prep strategies, refine your record filters, and discuss the results with business stakeholders.

    When discovering what seems to be a good DP approach, take time to rethink the subset of data you really need to meet the business objective. Running your data prep rules on the entire data set will be very time consuming, so think critically with business stakeholders about which entities and attributes you do and don’t need and which records you can safely filter out.
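    The sample-then-iterate approach above can be sketched as follows; the extract and the filter rule are invented stand-ins for your real data and cleansing strategy:

```python
import random

random.seed(7)  # reproducible illustration

# Invented full extract -- 100,000 records with a numeric amount field.
full_extract = [{"id": i, "amount": random.gauss(100, 60)} for i in range(100_000)]

# Work on a small random sample while iterating on prep rules.
sample = random.sample(full_extract, k=1_000)

def keep(record):
    # Example cleansing rule: drop records with non-positive amounts.
    return record["amount"] > 0

kept = [r for r in sample if keep(r)]
rejection_rate = 1 - len(kept) / len(sample)  # review this with stakeholders
```

    Only once the rules and the resulting rejection rate look right on the sample would you run them across the full extract.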

    Final thoughts

    Proper and thorough data preparation, conducted from the start of an AI/BI project, leads to faster, more efficient AI and BI down the line. DP steps and processes outlined here apply to whatever technical setup you are using, and they will get you better results.

    Note that DP is not a 'do once and forget' task. Data is constantly generated from multiple sources that may change over time, and the context of your business decisions will certainly change over time. Partnering with data preparation solution providers is an important consideration for the long-term capability of your DP infrastructure.

    Author: Wayne Yaddow

    Source: TDWI

  • Digital technologies to deliver European businesses 545 billion euros over the next two years

    European companies can achieve a revenue increase of 545 billion euros over the next two years by applying digital tools and technologies. For Dutch companies, the figure is 23.5 billion euros. That is the conclusion of a study by Cognizant, conducted with Roubini Global Economics among more than 800 European companies.
    The study, The Work Ahead – Europe's Digital Imperative, is part of a global research program into the changing nature of work in the digital age. The results show that the organizations most proactive in bringing the physical and virtual worlds closer together have the greatest chance of increasing revenue.
    Tapping revenue potential
    Executives indicate that technologies such as Artificial Intelligence (AI), Big Data, and blockchain can be a source of new business models and revenue streams, changing customer relationships, and lower costs. In fact, respondents expect digital technologies to have a positive effect of 8.4 percent on revenue between now and 2018.
    Digitization can drive both cost efficiency and revenue growth. By applying intelligent process automation (IPA), for example, in which software robots take over routine tasks, companies can cut costs in the middle and back office. The analysis shows that the impact of digital transformation on revenue and cost savings in the industries studied (retail, financial services, insurance, manufacturing, and life sciences) comes to 876 million euros in 2018.
    Still laggards in the digital field
    European executives expect a digital economy to be driven by a combination of data, algorithms, software robots, and connected devices. Asked which technology will have the greatest influence on work in 2020, Big Data emerges as the winner, named by no fewer than 99 percent of respondents. Strikingly, AI finishes a close second at 97 percent; respondents regard AI as more than hype. Indeed, they expect AI to take a central place in the future of work in Europe.
    On the other hand, the study shows that late adopters can expect a combined loss of 761 billion euros in 2018.
    A third of the managers surveyed say that, in their view, their employer lacks the knowledge and skills to implement the right digital strategy, or even has no idea what needs to be done. 30 percent of respondents believe their leadership invests too little in new technologies, while 29 percent encounter reluctance to adopt new ways of working.
    The main obstacles companies face in making the move to digital are fear of security issues (24%), budget constraints (21%), and a lack of talent (14%).
    Euan Davis, European Head of the Centre for the Future of Work at Cognizant, explains: “To make the necessary move to digital, management must be proactive and prepare their organization for the future of work. Slow innovation cycles and an unwillingness to experiment are the death knell for organizations trying to exploit digital opportunities. Managing the digital economy is an absolute necessity for organizations. Companies that do not prioritize deepening, broadening, strengthening, or improving their digital footprint are playing a losing game from the start.”
    About the study
    The findings are based on a global survey of 2,000 executives across industries, 250 middle managers responsible for other employees, 150 MBA students from major universities worldwide, and 50 futurists (journalists, academics, and authors). The survey of executives and managers was conducted in 18 countries in English, Arabic, French, German, Japanese, and Chinese; executives were interviewed by telephone, managers via an online questionnaire. The MBA students and futurists were interviewed by telephone in English (MBA students in 15 countries, futurists in 10). The Work Ahead – Europe's Digital Imperative contains the 800 responses from the European survey of executives and managers. More details can be found in Work Ahead: Insights to Master the Digital Economy.
    Source: emerce.nl, 28 November 2016
  • Exploring the risks of artificial intelligence

    shutterstock 117756049“Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.”

    These words, articulated by Neil Armstrong in a speech to a joint session of Congress in 1969, have fit squarely into almost every decade since, and it seems safe to posit that the rate of change in technology has accelerated to an exponential degree in the last two decades, especially in the areas of artificial intelligence and machine learning.

    Artificial intelligence is making a dramatic entrance into almost every facet of society in both predicted and unforeseen ways, causing excitement as well as trepidation. This reaction alone is predictable, but can we really predict the associated risks?

    It seems we’re all trying to get a grip on a potential reality, but information overload (yet another side effect we’re struggling to deal with in our digital world) can ironically make constructing an informed opinion more challenging than ever. In the search for some semblance of truth, it can help to turn to those in the trenches.

    In my continuing series of interviews with over 30 artificial intelligence researchers, I asked what they considered to be the most likely risk of artificial intelligence in the next 20 years.

    Some results from the survey, shown in the graphic below, included 33 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence).

    Two “greatest” risks bubbled to the top of the response pool (and the majority are not in the autonomous-robots camp, though a few do fall into it). According to this particular set of minds, the most pressing short- and long-term risks are the financial and economic harm that may be wrought, and the mismanagement of AI by human beings.

    Dr. Joscha Bach of the MIT Media Lab and Harvard Program for Evolutionary Dynamics summed up the larger picture this way:

    “The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.”

    Essentially, the introduction of AI may act as a catalyst that exposes and speeds up the imperfections already present in our society. Without a conscious and collaborative plan to move forward, we expose society to a range of risks, from bigger gaps in wealth distribution to negative environmental effects.

    Leaps in AI are already being made in the area of workplace automation and machine learning capabilities are quickly extending to our energy and other enterprise applications, including mobile and automotive. The next industrial revolution may be the last one that humans usher in by their own direct doing, with AI as a future collaborator and – dare we say – a potential leader.

    Some researchers believe it’s a matter of when and not if. In Dr. Nils Nilsson’s words, a professor emeritus at Stanford University, “Machines will be singing the song, ‘Anything you can do, I can do better; I can do anything better than you’.”

    In respect to the drastic changes that lie ahead for the employment market due to increasingly autonomous systems, Dr. Helgi Helgason says, “it’s more of a certainty than a risk and we should already be factoring this into education policies.”

    Talks at the World Economic Forum Annual Meeting in Switzerland this past January, where the topic of the economic disruption brought about by AI was clearly a main course, indicate that global leaders are starting to plan how to integrate these technologies and adapt our world economies accordingly – but this is a tall order with many cooks in the kitchen.

    Another commonly expressed risk over the next two decades is the general mismanagement of AI. It’s no secret that those in the business of AI have concerns, as evidenced by the $1 billion investment made by some of Silicon Valley’s top tech gurus to support OpenAI, a non-profit research group with a focus on exploring the positive human impact of AI technologies.

    “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” is the parallel message posted on OpenAI’s launch page from December 2015. How we approach the development and management of AI has far-reaching consequences, and shapes future society’s moral and ethical paradigm.

    Philippe Pasquier, an associate professor at Simon Fraser University, said “As we deploy more and give more responsibilities to artificial agents, risks of malfunction that have negative consequences are increasing,” though he likewise states that he does not believe AI poses a high risk to society on its own.

    With great responsibility comes great power, and how we monitor this power is of major concern.

    Dr. Pei Wang of Temple University sees major risk in “neglecting the limitations and restrictions of hot techniques like deep learning and reinforcement learning. It can happen in many domains.” Dr. Peter Voss, founder of SmartAction, expressed similar sentiments, stating that he most fears “ignorant humans subverting the power and intelligence of AI.”

    Thinking about the risks associated with emerging AI technology is hard work, engineering potential solutions and safeguards is harder work, and collaborating globally on implementation and monitoring of initiatives is the hardest work of all. But considering all that’s at stake, I would place all my bets on the table and argue that the effort is worth the risk many times over.

    Source: Tech Crunch

  • Four important drivers of data science developments

    Four important drivers of data science developments

    According to the Gartner Group, digital business reached a tipping point last year, with 49% of CIOs reporting that their enterprises have already changed their business models or are in the process of doing so. When Gartner asked CIOs and IT leaders which technologies they expect to be most disruptive, artificial intelligence (AI) was the top-mentioned technology.

    AI and ML are having a profound impact on enterprise digital transformation, becoming crucial to competitive advantage and even survival. As the field grows, four trends are emerging that will shape data science over the next five years:

    Accelerate the full data science life-cycle

    The pressure to grow ROI from AI and ML initiatives has pushed demand for new innovative solutions that accelerate AI and data science. Although data science processes are iterative and highly manual, more than 40% of data science tasks are expected to be automated by 2020, according to Gartner, resulting in increased productivity and broader usage of data across the enterprise.

    Recently, automated machine learning (AutoML) has become one of the fastest-growing technologies for data science. Machine learning, however, typically accounts for only 10-20% of the entire data science process. Real pains exist before the machine learning stage, in data and feature engineering. The new concept of data science automation goes beyond machine learning automation to include data preparation, feature engineering, machine learning, and the production of full data science pipelines. With data science automation, enterprises can genuinely accelerate AI and ML initiatives.

    Leverage existing resources for democratization

    Despite substantial investments in data science across many industries, the scarcity of data science skills and resources often limits the advancement of AI and ML projects in organizations. The shortage of data scientists has created a challenge for anyone implementing AI and ML initiatives, forcing a closer look at how to build and leverage data science resources.

    Beyond highly specialized technical skills and mathematical aptitude, data scientists must also couple these skills with domain or industry knowledge relevant to a specific business area. Domain knowledge is required for problem definition and result validation, and is a crucial enabler of delivering business value from data science. Relying on 'data science unicorns' that have all these skill sets is neither realistic nor scalable.

    Enterprises are focusing on repurposing existing resources as 'citizen' data scientists. The rise of AutoML and data science automation can unlock data science to a broader user base and allow the practice to scale. By empowering citizen data scientists to execute standard use cases, skilled data scientists can focus on high-impact, technically challenging projects that produce greater value.

    Augment insights for greater transparency

    As more organizations adopt data science in their business processes, relying on AI-derived recommendations that lack transparency is becoming problematic. Increased regulatory oversight like the GDPR has exacerbated the problem. Transparent insights make AI models more 'oversight' friendly and have the added benefit of being far more actionable.

    White-box AI models help organizations maintain accountability in data-driven decisions and allow them to live within the boundaries of regulations. The challenge is the need for high-quality and transparent inputs (aka 'features'), often requiring multiple manual iterations to achieve the needed transparency. Data science automation allows data scientists to explore millions of hypotheses and augments their ability to discover transparent and predictive features as business insights.

    Operationalize data science in business

    Although ML models are often tiny pieces of code, deploying them once they are finally deemed ready for production can be complicated and problematic. For example, since data scientists are not software engineers, the quality of their code may not be production-ready. Data scientists often validate models with down-sampled datasets in lab environments, so models may not scale to production-size datasets. Also, the performance of deployed models degrades as data invariably changes, making model maintenance pivotal to continuously extracting business value from AI and ML models. Data and feature pipelines are much bigger and more complex than the ML models themselves, and operationalizing them is even more complicated.

    One promising approach is to leverage concepts from continuous deployment through APIs. Data science automation can generate APIs that execute the full data science pipeline, accelerating deployments while also maintaining an ongoing connection to development systems that speeds up the optimization and maintenance of models.
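
    The API-based deployment pattern can be sketched as a JSON-in/JSON-out handler that runs the full pipeline per request. A real deployment would sit behind a web framework or model server; the handler name and toy scoring logic here are invented:

```python
# Hypothetical sketch: exposing a full data science pipeline behind a
# JSON-in/JSON-out scoring endpoint. The same handler contract applies
# whether it is served by a web framework or a managed model server.
import json

def full_pipeline(record):
    """Stand-in for a generated data prep + feature + model pipeline."""
    record = {k: v for k, v in record.items() if v is not None}   # prep
    record["spend_per_visit"] = record["total_spend"] / max(record["visits"], 1)
    return {"id": record["id"], "score": min(record["spend_per_visit"] / 100, 1.0)}

def score_endpoint(request_body: str) -> str:
    """Handle one scoring request: parse JSON, run the pipeline, return JSON."""
    record = json.loads(request_body)
    return json.dumps(full_pipeline(record))

print(score_endpoint('{"id": 7, "total_spend": 250, "visits": 5}'))
# -> {"id": 7, "score": 0.5}
```

    Because the API wraps the whole pipeline, not just the model, a retrained or re-engineered pipeline can be redeployed behind the same endpoint without touching its consumers.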

    Data science is at the heart of AI and ML. While the promise of AI is real, the problems associated with data science are also real. Through better planning, closer cooperation with line of business and by automating the more tedious and repetitive parts of the process, data scientists can finally begin to focus on what to solve, rather than how to solve.

    Author: Daniel Gutierrez

    Source: Insidebigdata

  • Gaining control of big data with the help of NVMe

    Gaining control of big data with the help of NVMe

    Every day, an unfathomable amount of data, nearly 2.5 quintillion bytes, is generated all around us. Part of it we see directly, such as the pictures and videos on our phones, social media posts, and banking and other apps.

    In addition to this, there is data being generated behind the scenes by ubiquitous sensors and algorithms, whether that’s to process quicker transactions, gain real-time insights, crunch big data sets or to simply meet customer expectations. Traditional storage architectures are struggling to keep up with all this data creation, leading IT teams to investigate new solutions to keep ahead and take advantage of the data boom.

    Some of the main challenges are understanding performance, removing data throughput bottlenecks, and being able to plan for future capacity. Architecture can often lock businesses into legacy solutions, and performance needs can vary and change as data sets grow.

    Architectures designed and built around NVMe (non-volatile memory express) can provide the perfect balance, particularly for data-intensive applications that demand fast performance. This is extremely important for organizations that are dependent on speed, accuracy and real-time data insights.

    Industries such as healthcare, autonomous vehicles, artificial intelligence (AI)/machine learning (ML), and genomics are at the forefront of the transition to high-performance NVMe storage solutions that deliver fast data access for the high-performance computing systems that drive new research and innovations.


    Genomics

    With traditional storage architectures, detailed genome analysis can take upwards of five days to complete, which makes sense considering an initial analysis of one person’s genome produces approximately 300 GB to 1 TB of data, and a single round of secondary analysis on just one person’s genome can require upwards of 500 TB of storage capacity. With an NVMe solution implemented, however, it’s possible to get results in just one day.

    In a typical study, genome research and life sciences companies need to process, compare and analyze the genomes of between 1,000 and 5,000 people per study. This is a huge amount of data to store, but it’s imperative that it’s done. These studies are working toward revolutionary scientific and medical advances, looking to personalize medicine and provide advanced cancer treatments. This is only now becoming possible thanks to the speed that NVMe enables researchers to explore and analyze the human genome.

    Autonomous vehicles

    Autonomous vehicles are a growing trend in the tech industry. Self-driving cars are the next big thing, and various companies are working tirelessly to perfect the idea. To function properly, these vehicles need very fast storage to accelerate the applications and data that ‘drive’ autonomous vehicle development. Core requirements for autonomous vehicle storage include:

    • Must have a high capacity in a small form factor
    • Must be able to accept input data from cameras and sensors at “line rate” – AKA have extremely high throughput and low latency
    • Must be robust and survive media or hardware failures
    • Must be “green” and have minimal power footprint
    • Must be easily removable and reusable
    • Must use simple but robust networking

    What kind of storage meets all these requirements? That’s right – NVMe.

    Artificial Intelligence

    Artificial Intelligence (AI) is gaining a lot of traction in industries ranging from finance to manufacturing, and beyond. In finance, AI does things like predict investment trends. In manufacturing, AI-based image recognition software checks for defects during product assembly. Wherever it’s used, AI needs a high level of computing power, coupled with a high-performance, low-latency architecture, to enable parallel processing of data in real time.

    Once again, NVMe steps up to the plate, providing the speed and processing power that are critical during training and inference. Without NVMe to prevent bottlenecks and latency issues, these stages can take much, much longer, which in turn can lead to the temptation to take shortcuts, causing software to malfunction or make incorrect decisions down the line.

    The rapid increase in data creation has put traditional storage architectures under high pressure due to their lack of scalability and flexibility, both of which are required to meet future capacity and performance requirements. This is where NVMe comes in, breaking the barriers of existing designs by offering unprecedented density and performance. These breakthroughs make NVMe well suited to help manage and maintain the data boom.

    Author: Ron Herrmann

    Source: Dataversity


  • How does augmented intelligence work?

    Computers and devices that think along with us have long ceased to be science fiction. Artificial intelligence (AI) can be found in washing machines that adjust their program to the size of the load and in computer games that adapt to the players’ skill level. How can computers help people make smarter decisions? This extensive whitepaper describes the models applied in the HPE IDOL analytics platform.

    Mathematical models provide the human touch

    Processors can perform in the blink of an eye a calculation that would take humans weeks or months. That is why computers are better at chess than humans, but worse at poker, where the human element plays a larger role. How does a search and analytics platform ensure that more of the ‘human’ ends up in the analysis? This is achieved by using various mathematical models.

    Analytics for text, audio, images, and faces

    The art is to extract actionable information from data. This is accomplished by applying pattern recognition to different datasets. In addition, classification, clustering, and analysis play a major role in obtaining the right insights. Not only text is analyzed; increasingly, audio files and images, objects, and faces are analyzed as well.

    Artificial intelligence helps people

    The whitepaper describes in detail how patterns are found in text, audio, and images. How does a computer understand that the video it is analyzing is about a human? How is a geometric 3D image created from flat images, and how does a computer decide what it sees? Think, for example, of an automated signal to the control room when a stand becomes too crowded or a traffic jam forms. How do theoretical models help computers perceive as humans do and support our decisions? You can read about this and more in the whitepaper Augmented intelligence: Helping humans make smarter decisions, available at AnalyticsToday.

    Source: Analyticstoday.nl, 12 October 2016

  • How AI is influencing web design

    How AI is influencing web design

    Artificial intelligence in web design is making a major impact. This is what to know about how it works and how effective it can be.

    When Alan Turing laid the foundations for machine intelligence, few could have predicted that the technology would become as widespread and ubiquitous as it is today.

    Since then, companies have adopted AI (artificial intelligence) for pretty much everything, from self-driving cars to medical technology to banking. We live in the age of big data, an age in which we use machines to collect and analyze massive amounts of data in a way that humans couldn’t do on their own. In many respects, the cognition of machines is already surpassing that of humans.

    With the explosion of the internet, AI has also become a critical element of web design. Artificial intelligence has helped with everything from the building and customization of websites and brands to the way users experience those websites themselves.

    Here are some of the ways AI is making web design increasingly sophisticated:

    AI designs websites

    Artificial design intelligence (ADI) tools are the building blocks of many of today’s websites. These days, ADI systems have evolved into effective tools with functional and attractive results. Wix and Bookmark, for example, offer popular automated website building tools with customizable options. Designers, developers, and everyday entrepreneurs no longer have to build websites from the ground up, nor do they need to spend hours choosing the perfect template. Instead, both Wix and Bookmark claim that websites can intelligently design themselves, using nothing more than the site’s name and the answers to a few quick questions.

    Not only does AI help engineer the web building process, but it’s also become the designer behind the brand names and logos that dominate a website’s home page. Companies are turning to artificial intelligence to automate their branding process, using AI tools like Tailor Brands to design their own customized logos in seconds. In this way, AI has made good web design more accessible and affordable for big companies and small-scale entrepreneurs alike.

    AI enhances user experience

    AI isn’t just changing web design on the developer end; it’s changing the way users experience websites, too. AI is the force behind the chatbots that offer conversation or assistance on many companies’ websites. While conversations with chatbots once felt frustrating, repetitive, and a little too robotic, more sophisticated AI-powered chatbots use natural language processing (NLP) to have more natural, authentic conversations and to genuinely “understand” their customers’ needs. Sephora’s chatbot on the Kik messaging platform is one example of a powerful NLP chatbot that understands customers’ beauty needs and provides them with recommendations based on these needs.

    In addition to the practical value of chatbots, the prevalence of chatbots indicates an increasing shift towards customer-focused websites, ones that prioritize drawing customers in over getting their message out. With the emergence of AI chatbots, websites have transformed into customer engagement platforms, where customers can offer their feedback, ask for help, or find products or services suited to their preferences.

    AI analyzes results

    We’ve seen how AI has benefitted both website building and user experience. A third way AI is affecting web design is by enabling analytics tools that help companies analyze their results and refine their websites accordingly.

    By crunching down big data into analyzable numbers and patterns, predictive analytics tools like TensorFlow and Infosys Nia reveal real-time insights about what does and doesn’t work for website visitors and prospective customers. This enables businesses to understand which types of customers are drawn to their site, and to accommodate those visitors with a seamless user experience. Using results from AI-powered analytics platforms, web developers and designers are able to tweak and refine their site and make it increasingly user-friendly.
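
    By way of illustration (this is not TensorFlow’s or Infosys Nia’s API), the core of such segment analysis can be sketched in a few lines of plain Python: a conversion rate per traffic source, computed from visitor records. The data and source names are invented.

```python
# Illustrative sketch of segment-level pattern analysis on visitor data:
# conversion rate per traffic source. Real analytics platforms do this
# at scale and with richer models; the mechanics are the same.
from collections import defaultdict

visits = [
    {"source": "search", "converted": True},
    {"source": "search", "converted": False},
    {"source": "social", "converted": False},
    {"source": "social", "converted": False},
    {"source": "email",  "converted": True},
    {"source": "email",  "converted": True},
]

totals, wins = defaultdict(int), defaultdict(int)
for v in visits:
    totals[v["source"]] += 1
    wins[v["source"]] += v["converted"]   # True counts as 1

rates = {s: wins[s] / totals[s] for s in totals}
print(rates)  # {'search': 0.5, 'social': 0.0, 'email': 1.0}
```

    Insights like these tell a designer which visitor segments the site is failing, and where to refine the experience first.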

    AI in web design: where is it heading next?

    AI is already being used in web design to make site building and design easier and more accessible, to enhance UX and further user engagement, and to drive site improvement through big data analytics. As artificial intelligence becomes even more advanced, affordable, and widespread, it will continue to affect web design in ways we can only imagine. Will improved natural language processing make chatbots indistinguishable from human representatives? Will websites readily adapt, real-time, to users’ preferences and needs? Whatever happens, AI is already the new normal.

    Author: Diana Hope

    Source: SmartDataCollective

  • How artificial intelligence will shape the future of business

    How artificial intelligence will shape the future of business

    From the boardroom at the office to your living room at home, artificial intelligence (AI) is nearly everywhere nowadays. Tipped as the most disruptive technology of all time, it has already transformed industries across the globe. And companies are racing to understand how to integrate it into their own business processes.

    AI is not a new concept. The technology has been with us for a long time, but in the past, there were too many barriers to its use and applicability in our everyday lives. Now improvements in computing power and storage, increased data volumes and more advanced algorithms mean that AI is going mainstream. Businesses are harnessing its power to reinvent themselves and stay relevant in the digital age.

    The technology makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. It does this by processing large amounts of data and recognising patterns. AI analyses much more data than humans at a much deeper level, and faster.

    Most organisations can’t cope with the data they already have, let alone the data that is around the corner. So there’s a huge opportunity for organisations to use AI to turn all that data into knowledge to make faster and more accurate decisions.

    Customer experience

    Customer experience is becoming the new competitive battleground for all organisations. Over the next decade, businesses that dominate in this area will be the ones that survive and thrive. Analysing and interpreting the mountains of customer data within the organisation in real time and turning it into valuable insights and actions will be crucial.

    Today most organisations are using data only to report on what their customers did in the past. SAS research reveals that 93% of businesses currently cannot use analytics to predict individual customer needs.

    Over the next decade, we will see more organisations using machine learning to predict future customer behaviours and needs. Just as an AI machine can teach itself chess, organisations can use their existing massive volumes of customer data to teach AI what the next-best action for an individual customer should be. This could include what product to recommend next or which marketing activity is most likely to result in a positive response.
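
    As a hedged sketch of the next-best-action idea, assuming nothing about any vendor’s implementation: for a given customer segment, recommend the action with the highest historical positive-response rate. All segment and action names here are invented.

```python
# Toy "next-best action" from historical outcomes. A production system
# would use a trained model and far richer features; this shows the idea.
from collections import defaultdict

history = [
    # (customer segment, action taken, positive response?)
    ("young_urban", "discount_email", True),
    ("young_urban", "discount_email", False),
    ("young_urban", "app_push", True),
    ("young_urban", "app_push", True),
    ("family", "discount_email", True),
    ("family", "app_push", False),
]

def next_best_action(segment):
    """Return the action with the best observed response rate for a segment."""
    totals, wins = defaultdict(int), defaultdict(int)
    for seg, action, responded in history:
        if seg == segment:
            totals[action] += 1
            wins[action] += responded
    return max(totals, key=lambda a: wins[a] / totals[a])

print(next_best_action("young_urban"))  # app_push (2/2 vs 1/2 responses)
```

    The same scoring step, run per individual rather than per segment and fed by a learned model, is what drives the product and marketing recommendations described above.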

    Automating decisions

    In addition to improving insights and making accurate predictions, AI offers the potential to go one step further and automate business decision making entirely.

    Front-line workers or dependent applications make thousands of operational decisions every day that AI can make faster, more accurately and more consistently. Ultimately this automation means improving KPIs for customer satisfaction, revenue growth, return on assets, production uptime, operational costs, meeting targets and more.

    Take Shop Direct for example, which owns the Littlewoods and Very brands. It uses AI from SAS to analyse customer data in real time and automate decisions, driving groundbreaking personalisation at an individual customer level. This approach saw Shop Direct’s profits surge by 40%, driven by a 15.9% increase in sales from Very.co.uk.

    AI is here. It’s already being adopted faster than the arrival of the internet. And it’s delivering business results across almost every industry today. In the next decade, every successful company will have AI. And the effects on skills, culture and structure will deliver superior customer experiences.

    Author: Tiffany Carpenter

    Source: SAS

  • How autonomous vehicles are driven by data

    How autonomous vehicles are driven by data

    Understanding how to capture, process, activate, and store the staggering amount of data each vehicle is generating is central to realizing the future of autonomous vehicles (AVs).

    Autonomous vehicles have long been spoken about as one of the next major transformations for humanity. And AVs are already a reality in delivery, freight services, and shipping, but the day when a car is driving along the leafy suburbs with no one behind the wheel, or level five autonomy as it’s also known, is still far off in the future.

    While we are a long way off from having AVs on our roads, IHS Markit reported last year that there will be more than 33 million autonomous vehicles sold globally in 2040. So, the revolution is coming. And it’s time to be prepared.

    Putting some data in the tank

    As with so many technological advancements today, data is critical to making AVs move intelligently. Automakers, from incumbents to Silicon Valley startups, are running tests and racking up thousands of miles in a race to be the leader in this field. Combining a variety of sensors to recognize their surroundings, each autonomous vehicle uses radar, lidar, sonar and GPS, to name just a few technologies, to navigate the streets and process what is around them to drive safely and efficiently. As a result, every vehicle is generating a staggering amount of data.

    According to a report by Accenture, AVs today generate between 4 and 6 terabytes (TB) of data per day, with some producing as much as 8 to 10 TB depending on the number of devices mounted on the vehicle. The report notes that, on the low end, the data generated by one test car in one day is roughly equivalent to that of nearly 6,200 internet users.
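
    The quoted equivalence can be sanity-checked with simple arithmetic. The ~650 MB of traffic per internet user per day is an assumed figure used for illustration, not taken from the report:

```python
# Rough sanity check of the Accenture comparison: 4 TB/day per test car
# versus ordinary internet users, assuming ~650 MB/day per user.
car_bytes_per_day = 4e12       # 4 TB, low end of the quoted range
user_bytes_per_day = 650e6     # assumed average per internet user

equivalent_users = car_bytes_per_day / user_bytes_per_day
print(round(equivalent_users))  # ~6150, close to the quoted ~6,200
```
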

    While it can seem a little overwhelming, this data contains valuable insights and ultimately holds the key in getting AVs on the road. This data provides insights into how an AV identifies navigation paths, avoids obstacles, and distinguishes between a human crossing the road or a trash can that has fallen over in the wind. In order to take advantage of what this data can teach us though, it must be collected, downloaded, stored, and activated to enhance the decision-making capabilities of each vehicle. By properly storing and managing this data, you are providing the foundation for progress to be made securely and speedily.

    Out of the car, into the ecosystem

    The biggest challenge facing AV manufacturers right now is testing. Getting miles on the clock and learning faster than competitors to eliminate errors, reach deadlines, and get one step closer to hitting the road. Stepping outside of the car, there is a plethora of other elements to be considered from a data perspective that are critical to enabling AVs.

    Not only does data need to be stored and processed in the vehicle, but also elsewhere on the edge and some of it at least, in the data center. Test miles are one thing, but once AVs hit the road for real, they will need to interact in real-time with the streets they are driving on. Hypothetically speaking, you might imagine that one day gas stations will be replaced by mini data centers on the edge, ensuring the AVs can engage with their surroundings and carry out the processing required to drive efficiently.

    Making the roads safer

    While it might seem that AVs are merely another technology humans want to use to make their lives easier, it’s worth remembering some of the bigger benefits. The U.S. National Highway Traffic Safety Administration has stated that with human error being the major factor in 94% of all fatal accidents, AVs have the potential to significantly reduce highway fatalities by addressing the root cause of crashes.

    That’s not to say humans won’t be behind the wheel at all in 20 years, but as artificial intelligence (AI) and deep learning (DL) have done in other sectors, they will augment our driving experience and look to put a serious dent in the number of fatal road accidents every year, which currently stands at nearly 1.3 million.

    Companies in the AV field understand the potential that AI and DL technology represents. Waymo, for example, shared one of its datasets in August 2019 with the broader research community to enable innovation. With data containing test miles in a wide variety of environments, from day and night, to sunshine and rain, data like this can play a pivotal role in preparing cars for all conditions and maintaining safety as the No. 1 priority.

    Laying the road ahead

    Any company manufacturing AVs or playing a significant role in the ecosystem, from edge to core, needs to understand the data requirements and implement a solid data strategy. By getting the right infrastructure in place ahead of time, AVs truly can become a reality and bring with them all the anticipated benefits, from efficiency of travel to the safety of pedestrians.

    Most of the hardware needed is already there: radars, cameras, lidar, chips and, of course, storage. But understanding how to capture, process, activate, and store the data created is central to realizing the future of AVs. Data is the gas in the proverbial tank, and by managing this abundant resource properly, you might just see that fully automated car in your neighborhood sooner than expected.

    Author: Jeff Fochtman

    Source: Informationweek

  • How to create a trusted data environment in 3 essential steps

    How to create a trusted data environment in 3 essential steps

    We are in the era of the information economy. Now, more than ever, companies have the capability to optimize their processes through the use of data and analytics. While there are endless possibilities when it comes to data analysis, there are still challenges in maintaining, integrating, and cleaning data to ensure that it empowers people to make decisions.

    Bottom up or top down: which is best?

    As IT teams begin to tackle the data deluge, a question often asked is: should this problem be approached from the bottom up or the top down? There is no “one-size-fits-all” answer here, but every data team needs a high-level view of its data subject areas. Think of this high-level view as a map you create to define priorities and identify problem areas for your business in the modern data-based economy. This map allows you to set up a phased approach to optimizing the data assets that contribute the most value.

    The high-level view unfortunately is not enough to turn your data into valuable assets. You also need to know the details of your data.

    Getting the details of your data is where a data profile comes into play. This profile describes your data from the technical perspective, while the high-level view (the enterprise information model) gives you the view from the business perspective. Real business value comes from combining both into a transversal, holistic view of your data assets that allows you to zoom in or out. The high-level view with technical details (even without the profiling) lets you start with the most important phase in the digital transformation: discovery of your data assets.

    Not only data integration, but data integrity

    With all the data travelling around in different types and sizes, integrating the data streams across various partners, apps, and sources has become critical. But it’s more complex than ever.

    Due to the sizes and variety of data being generated, not to mention the ever-increasing speed in go to market scenarios, companies should look for technology partners that can help them achieve this integration and integrity, either on premise or in the cloud.

    Your 3 step plan to trusted data

    Step 1: Discover and cleanse your data

    A recent IDC study found that only 19% of a data professional’s time is spent analyzing information and delivering valuable business outcomes. They spend 37% of their time preparing data and 24% of their time goes to protecting data. The challenge is to overcome these obstacles by bringing clarity, transparency, and accessibility to your data assets.

    This discovery platform, which also lets you profile your data, understand its quality, and build a confidence score that earns the business’s trust in the data assets, takes the form of an auto-profiling data catalog.
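
    As a rough sketch of what an auto-profiling catalog might compute per column, assuming an invented scoring formula: completeness and distinctness metrics combined into a single confidence score.

```python
# Hypothetical per-column profiling, the kind of metric an auto-profiling
# data catalog surfaces. The weighting in the confidence score is invented
# for illustration; real catalogs use richer quality dimensions.
def profile_column(values):
    non_null = [v for v in values if v is not None]
    completeness = len(non_null) / len(values)
    distinctness = len(set(non_null)) / len(non_null) if non_null else 0.0
    # Toy confidence score: weight completeness most heavily.
    confidence = 0.7 * completeness + 0.3 * distinctness
    return {"completeness": completeness,
            "distinctness": distinctness,
            "confidence": confidence}

emails = ["a@x.com", "b@x.com", None, "b@x.com"]
print(profile_column(emails))
```

    Publishing scores like these alongside each data set is what gives business users an at-a-glance basis for trusting (or questioning) an asset.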

    Thanks to the application of Artificial Intelligence (AI) and Machine Learning (ML) in the data catalogs, data profiling can be provided as self-service towards power users.

    Bringing transparency, understanding, and trust to the business brings out the value of the data assets.

    Step 2: Organize data you can trust and empower people

    According to the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, 2017: “By 2020, organizations that offer users access to a curated catalog of internal and external data will realize twice the business value from analytics investments than those that do not.”

    An important phase in a successful data governance framework is establishing a single point of trust. From the technical perspective, this translates to collecting all the data sets in a single point of control. The governance aspect is the capability to assign roles and responsibilities directly in this central point of control, which allows you to instantly operationalize your governance from the place where the data originates.

    The organization of your data assets goes hand in hand with business understanding of the data, transparency, and provenance. An end-to-end view of your data lineage ensures compliance and risk mitigation.

    With the central compass in place and roles and responsibilities assigned, it’s time to empower people for data curation and remediation, in which ongoing communication is of vital importance for the adoption of a data-driven strategy.

    Step 3: Automate your data pipelines & enable data access

    Different layers and technologies make our lives more complex. It is important to keep our data flows and streams aligned and to adapt swiftly to changes in business needs.

    The needed transformations, data quality profiling, and reporting can be extensively automated.

    Start small and scale big. Much of this intelligence can be achieved by applying AI and ML. These algorithms take cumbersome work out of analysts’ hands and scale more easily. This automation gives analysts a faster understanding of the data and lets them build better insights, faster, in a given amount of time.

    Putting data at the center of everything, implementing automation, and provisioning data through one single platform are key success factors in your digital transformation and in becoming a truly data-driven organization.

    Source: Talend

  • How to improve your business processes with Artificial Intelligence?

    How to improve your business processes with Artificial Intelligence?

    In the age of digital disruption, even the world’s largest companies aren’t impervious to agile competitors that move quickly, iterate fast, and can build products faster than their peers. That’s why many legacy organizations are taking a closer look at business process management.

    Simply speaking, business process management is the practice of reengineering existing systems in your firm for better productivity and efficiency. It takes a proactive approach towards identifying business problems and the steps needed to rectify them. And while business process management has traditionally been the forte of management consultants and other functional experts, rapid advancements in artificial intelligence and big data means this sector is also undergoing a fundamental transformation.

    So it begs the question: how do you start “plugging AI” into your company’s existing data and systems?

    Where to begin?

    Artificial intelligence is exciting because it promises a totally new approach to business operations. However, most traditional organizations don’t have the necessary infrastructure and/or computing power to deploy these technologies.

    Moving your data and applications to the cloud is a very popular solution to unlocking the necessary computing resources, but there's a catch. You can’t just copy-paste your files to the cloud and start using AI. Older systems weren’t built with a cloud deployment in mind, so leveraging the cloud usually requires rebuilding your existing software using a common cloud-ready platform like Kubernetes, Pivotal Cloud, and Docker Swarm.

    The point is that once you make a decision towards digital transformation, you need complete buy-in from all areas of the business and a commitment to process and technology changes. Getting that commitment typically involves showcasing the real benefits that AI can unlock. Let’s take a closer look at how artificial intelligence is actively impacting the way companies do their business.

    1. Analyzing sales calls

    When it comes to streamlining business processes and operations, one crucial area is sales calls. That’s because sales, and the revenue that follows from them, are the bread and butter of your business. Top-tier sales representatives keep your firm chugging along and breaking new ground.

    In the past, analyzing sales calls was a manual process. There might have been a standard sales playbook with generic questions that each individual would be expected to ask. But now, AI conversational tools like Gong are automating this process entirely.

    Gong is able to record each outbound sales call that your team makes and pick up on cues that help it determine how the call went. So, a successful sales call will probably see the prospect talking more than the sales rep, for example.
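
    As a toy illustration of the talk-ratio cue mentioned above (hypothetical code, not Gong's actual product, which does far more: topic detection, sentiment, question tracking), a diarized call transcript can be reduced to per-speaker shares of talk time:

```python
# Compute each speaker's share of total talk time from diarized segments.
# Purely illustrative; the segment data below is made up.

def talk_ratio(segments):
    """segments: list of (speaker, seconds) tuples from speaker diarization."""
    totals = {}
    for speaker, seconds in segments:
        totals[speaker] = totals.get(speaker, 0.0) + seconds
    grand_total = sum(totals.values())
    return {speaker: t / grand_total for speaker, t in totals.items()}

call = [("prospect", 120.0), ("rep", 60.0), ("prospect", 90.0), ("rep", 30.0)]
ratios = talk_ratio(call)
# prospect: 0.70, rep: 0.30 -- a prospect-heavy call, often a good sign
```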

    2. Converting voicemail into text

    Have you ever heard the phrase: “Your unhappiest customers are your greatest source of learning?” These famous words were said by none other than Bill Gates. But how can you even accurately quantify customer sentiment if you don’t take the requisite steps to track it?

    It’s certainly possible that a large chunk of your customers don’t want to remain on hold while waiting for a customer support agent and prefer to leave a voicemail instead. Intelligent automation tools like Workato are making it possible to automate voicemail follow-ups, thereby ensuring that no customer falls through the cracks and each one is given an appropriate response to their concerns.

    For example, Workato was able to help automate voicemail follow-ups for a large chain of cafes. Whenever a new voicemail came into its system, the intelligent tool would use speech to text conversion to create a transcript of the voicemail. It would then take that text and add it on the service ticket, giving customer support agents a much better idea of the nature of the complaint and allowing them to resolve it quicker.
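
    The cafe-chain workflow above can be sketched roughly as follows. Every name here is hypothetical and this is not Workato's API; the transcription step is a placeholder for a real speech-to-text service:

```python
# Sketch of the voicemail pipeline: audio -> transcript -> service ticket.

def transcribe(audio_bytes):
    """Placeholder for a speech-to-text call (e.g. a cloud STT service)."""
    return "Customer reports a double charge on order 1042."

def attach_to_ticket(ticket, transcript):
    """Append the transcript to the ticket's notes for the support agent."""
    ticket["notes"].append(transcript)
    return ticket

def handle_voicemail(audio_bytes, ticket):
    transcript = transcribe(audio_bytes)
    return attach_to_ticket(ticket, transcript)

ticket = {"id": 7, "notes": []}
updated = handle_voicemail(b"...", ticket)
# updated["notes"] now contains the transcript the agent will read
```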

    3. Detecting fraud

    Occupational fraud causes organizations to lose about 5% of their total revenue every year with a potential total loss of $3.5 trillion. Machine learning algorithms are actively quelling this trend by spotting discrepancies and anomalies in everyday processes.

    For example, banks and financial institutions use intelligent algorithms to detect suspicious money transfers and payments. This process is also applicable in cybersecurity, tax evasion, customs clearing processes, insurance, and other fields. Large-scale organizations that are able to leverage AI are potentially looking at cost savings in the millions of dollars each year. These resources can then be spent in other critical areas of business such as research and development so companies can stay competitive and ahead of the curve.
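
    A minimal sketch of the anomaly-spotting idea, assuming a single amount feature and a simple statistical rule; real fraud systems use far richer features and trained models rather than one z-score:

```python
# Flag payments whose amounts deviate sharply from the historical norm.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

payments = [52.0, 48.5, 50.25, 49.9, 51.1, 47.8, 50.0, 4999.0]
suspicious = flag_anomalies(payments, threshold=2.0)
# [4999.0] -- the outlier payment gets routed to a human reviewer
```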


    Artificial intelligence isn’t just a fancy buzzword that people are tossing around with willful abandon. In fact, every time you take advantage of Google’s typo detection feature (when you see ‘did you mean’ in the search engine) you’re actually plugging into its DeepMind platform, an example of AI in everyday use.

    AI has the potential to bring greater efficiency, higher output, fewer interruptions, and, ultimately, higher revenue to businesses of all shapes and sizes.

    Author: Santana Wilson

    Source: Oracle

  • How to use AI image recognition responsibly?

    The use of artificial intelligence (AI) for image recognition offers great potential for business transformation and problem-solving. But numerous responsibilities are interwoven with that potential. Predominant among them is the need to understand how the underlying technologies work, and the safety and ethical considerations required to guide their use.

    Regulations coming for image, face, and voice recognition?

    Today, governance regulations have sprung up worldwide that dictate how an individual’s personal information is held and used, and who owns it. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are examples of regulations designed to address data and security challenges faced by consumers and the businesses that possess their associated data. If laws now apply to personal data, can regulations governing image and facial recognition (technology that can identify a person’s face and voice, the most personal 'information' we possess) be far behind? Further regulations are likely coming, but organizations shouldn’t wait to plan and direct their use of these technologies. Businesses need to follow how this technology is being both used and misused, and then proactively apply guidelines that govern how to use it effectively, safely, and ethically.

    The use and misuse of technology

    Many organizations use recognition capabilities in helpful and transformative ways. Medical imaging is a prime example. Through machine learning, predictive algorithms come to recognize tumors more accurately and faster than human doctors can. Autonomous vehicles use image recognition to detect road signs, traffic signals, other traffic, and pedestrians. For industrial manufacturers and utilities, machines have learned how to recognize defects in things like power lines, wind turbines, and offshore oil rigs through the use of drones. This ability removes humans from what can sometimes be dangerous environments, improving safety, enabling preventive maintenance, and increasing frequency and thoroughness of inspections. In the insurance field, machine learning helps process claims for auto and property damage after catastrophic events, which improves accuracy and limits the need for humans to put themselves in potentially unsafe conditions.

    Just as most technologies can be used for good, there are always those who seek to use them intentionally for ignoble or even criminal reasons. The most obvious example of the misuse of image recognition is deepfake video or audio. Deepfake video and audio use AI to create misleading content or alter existing content to try to pass off something as genuine that never occurred. An example is inserting a celebrity’s face onto another person’s body to create a pornographic video. Another example is using a politician’s voice to create a fake audio recording that seems to have the politician saying something they never actually said.

    In-between intentional beneficial use and intentional harmful use, there are gray areas and unintended consequences. If an autonomous vehicle company used only one country’s road signs as the data to teach the vehicle what to look for, the results might be disastrous if the technology is used in another country where the signs are different. Also, governments use cameras to capture on-street activity. Ostensibly, the goal is to improve citizen safety by building a database of people and identities. What are the implications for a free society that now seems to be under public surveillance? How does that change expectations of privacy? What happens if that data is hacked?

    Why take proactive measures?

    Governments and corporate governance bodies likely will create guidelines and laws that apply to these types of tools. There are a number of reasons why businesses should proactively plan for how they create and use these tools now, before these laws come into effect.

    Physical safety is a prime concern. If an organization creates or uses these tools in an unsafe way, people could be harmed. Setting up safety standards and guidelines protects people and also protects the business from legal action that may result from carelessness.

    Customers demand accountability from companies that use these technologies. They expect their personal data to be protected, and that expectation will extend to their image and voice information as well. Transparency helps create trust and that trust will be necessary for any business to succeed in the field of image recognition.

    Putting safety and ethics guidelines in place now, including establishing best practices such as model audits and model interpretability, may also give a business a competitive advantage by the time laws governing these tools are passed. Other organizations will be playing catch-up while those who have planned ahead gain market share over their competitors.

    Author: Bethann Noble

    Source: Cloudera

  • Human actions more important than ever with historically high volumes of data

    IDC predicts that our global datasphere (the digital data we create, capture, replicate, and consume) will grow from approximately 40 zettabytes in 2019 to 175 zettabytes in 2025, and that 60% of this data will be managed by enterprises.
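
    As an aside, the compound annual growth rate implied by that forecast can be computed directly (illustrative arithmetic, not a figure from the source):

```python
# Implied CAGR of IDC's 40 ZB (2019) -> 175 ZB (2025) forecast.
start_zb, end_zb, years = 40, 175, 2025 - 2019
cagr = (end_zb / start_zb) ** (1 / years) - 1
# roughly 0.28, i.e. about 28% growth per year
```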

    To both manage and make use of this near-future data deluge, enterprise organizations will increasingly rely on machine learning and AI. But IDC Research Director Chandana Gopal says this doesn’t mean that the importance of humans in deriving insights and decision making will decrease. In fact, the opposite is true.

    'As volumes of data increase, it becomes vitally important to ensure that decision makers understand the context and trust the data and the insights that are being generated by AI/ML, sometimes referred to as thick data', says Gopal in 10 Enterprise Analytics Trends to Watch in 2020.

    In an AI automation framework published by IDC, we state that it is important to evaluate the interaction of humans and machines by asking the following three questions:

    1. Who analyzes the data?
    2. Who decides based on the results of the analysis?
    3. Who acts based on the decision?

    'The answers to the three questions above will guide businesses towards their goal of maximizing the use of data and augmenting the capabilities of humans in effective decision making. There is no doubt that machines are better suited to finding patterns and correlations in vast quantities of data. However, as it is famously said, correlation does not imply causation, and it is up to the human (augmented with ML) to determine why a certain pattern might occur'.

    Training employees to become data literate and conversant with data ethnography should be part of every enterprise organization’s data strategy in 2020 and beyond, advises Gopal. As more and more decisions are informed and made by machines, it’s vital that humans understand the how and why.

    Author: Tricia Morris

    Source: Microstrategy

  • In an intelligent organization there is always room for a chatbot in HR

    People are the heart of a company, and the Human Resources department exists to take care of those people. HR is the guardian of the culture and makes sure employees get opportunities to grow. It keeps the company lively and healthy. HR, in short, revolves around people. Is a virtual assistant, in other words a chatbot, really in the right place among all these people?

    Although HR revolves around the people in an organization, HR staff spend roughly a quarter of their time on administrative tasks. Answering employees' questions, for example, is a daily recurring chore. Questions like 'how many vacation days do I have left?' or 'what are the rules around sick leave?' come up almost every day. A chatbot can answer all of those employee questions. This not only relieves the HR manager, it also gives employees immediate clarity. No more frustration over waiting a long time for the answer to a simple question. Sounds good, right?
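
    The FAQ pattern described here can be sketched as simple keyword matching. This is a toy example with made-up questions and answers; production HR bots use NLP intent classification:

```python
# Match an employee's question to a canned answer by keyword.
# The questions and answers below are entirely fictional.

FAQ = {
    ("vacation", "days"): "You have 12 vacation days remaining this year.",
    ("sick", "leave"): "Report sick leave to your manager before 9:00.",
}

def answer(question):
    """Return the first FAQ answer whose keywords all appear in the question."""
    words = set(question.lower().split())
    for keywords, reply in FAQ.items():
        if set(keywords) <= words:
            return reply
    return "I'll forward this to an HR colleague."

answer("How many vacation days do I have left?")
```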

    A chatbot can also keep precise track of the questions asked, in order to spot bottlenecks in HR policy. Moreover, with the help of artificial intelligence, a chatbot gets smarter as it receives more questions. The answers it gives will become better and more accurate every day. This is known as machine learning.

    Personal answers for specific situations

    Requesting leave in particular is an administrative task that often takes a lot of time. Think of applying for maternity leave, for example. A bot can give personal answers and solutions for this specific request.

    The chatbot can also play a role during illness. One of HR's most important tasks is keeping staff motivated. To contribute to this, a chatbot can, for example, send a get-well message when someone calls in sick. The virtual assistant can also ask how that person is doing and keep track of the answers, keeping an eye on their recovery.

    Smoothing out application procedures with a chatbot

    Given the current labor market, finding new staff is often difficult. It is therefore essential that the application process runs flawlessly. A chatbot can optimize this by answering an applicant's questions immediately. After answering a question, the chatbot can itself collect valuable data about the applicant. The bot stores the answers, making it easier to screen candidates. This makes life easier not only for the recruiter, but for the applicant as well.

    The vast majority of applicants, around 80%, consider going elsewhere if they don't receive regular updates about their application during the process. They stay on board when they are regularly kept informed of where things stand. A bot can keep an applicant up to date and thus start the recruitment process on a positive note. Once the applicant has come through the selection procedure and starts their probation period, onboarding begins. Onboarding is a crucial period for getting a new employee up to speed in the organization as quickly as possible. Instead of working through a checklist, the chatbot can take a large part of the onboarding off HR's hands so the employee can quickly get started on their own. Because all documents and information are ready and waiting in the chatbot, HR can focus more on the personal side of onboarding.

    A chatbot for HR, more room for people

    Despite the rise of new technology, the world of HR is one that revolves around people. People who need time to be there for each other, instead of constantly being occupied with administrative tasks. HR must be able to focus on employee development and act as a mentor. HR must be able to find the perfect new colleague and pursue the organization's goals. By deploying a chatbot, precisely the work that stands in the way of this can be taken off HR's hands. A company can then not only focus on what matters, but also give its employees the room to do what they are good at, with the bot always standing ready with the right information and the right advice. That is why an intelligent organization always has room for a chatbot in HR.

    Author: Joris Jonkman

    Source: Emerce

  • Integrating security, compliance, and session management when deploying AI systems

    As enterprises adopt AI (artificial intelligence), they'll need a sound deployment framework that enables security, compliance, and session management.

    As accessible as the various dimensions of AI are to today's enterprise, one simple fact remains: embedding scalable AI systems into core business processes in production depends on a coherent deployment framework. Without it, AI's potential automation and acceleration benefits almost certainly become liabilities, or will never be fully realized.

    This framework functions as a guardrail for protecting and managing AI systems, enabling their interoperability with existing IT resources. It's the means by which AI implementations with intelligent bots interact with one another for mission-critical processes.

    With this method, bots are analogous to railway cars transporting data between sources and systems. The framework is akin to the tracks the cars operate on, helping the bots to function consistently and dependably. It delivers three core functions:

    • Security
    • Compliance and data governance
    • Session management

    With this framework, AI becomes as dependable as any other well-managed IT resource. The three core functions each need to be supported as follows.


    Security

    A coherent AI framework primarily solidifies a secure environment for applied AI. AI is a collection of various cognitive computing technologies: machine learning, natural language processing (NLP), etc. Applied AI is the application of those technologies to fundamental business processes and organizational data. Therefore, it's imperative for organizations to tailor their AI frameworks to their particular security needs, in accordance with measures such as encryption or tokenization.

    When AI is subjected to these security protocols the same way employees or other systems are, there can be secure communication between the framework and external resources. For example, organizations can access optical character recognition (OCR) algorithms through AWS or cognitive computing options from IBM's Watson while safeguarding their AI systems.

    Compliance (and data governance)

    In much the same way organizations personalize their AI frameworks for security, they can also customize them for the various dimensions of regulatory compliance and data governance. Of cardinal importance is the treatment of confidential, personally identifiable information (PII), particularly with the passage of GDPR and other privacy regulations.

    For example, when leveraging NLP it may be necessary to communicate with external NLP engines. The inclusion of PII in such exchanges is inevitable, especially when dealing with customer data. However, the AI framework can be adjusted so that when PII is detected, it's automatically compressed, mapped, and rendered anonymous so bots deliver this information only according to compliance policies. It also ensures users can access external resources in accordance with governance and security policies.
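
    The PII-scrubbing step might look roughly like this regex-based sketch. Real frameworks use trained PII detectors and reversible tokenization rather than plain patterns; everything below is illustrative only:

```python
# Redact obvious PII (emails, phone numbers) before text leaves the
# organization for an external NLP engine.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

msg = "Contact jane.doe@example.com or +31 20 123 4567 about the claim."
redact(msg)
# "Contact <EMAIL> or <PHONE> about the claim."
```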

    Session management

    The session management capabilities of coherent AI frameworks are invaluable for preserving the context between bots for stateful relevance of underlying AI systems. The framework ensures communication between bots is pertinent to their specific functions in workflows.

    Similar to how DNA is passed along, bots can contextualize the data they disseminate to each other. For example, a general-inquiry bot may answer users' questions about various aspects of a job. However, once someone applies for the position, that bot must understand the context of the application data and pass it along to an HR bot. The framework provides this session management for the duration of the data's journey within the AI systems.
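
    The hand-off described above can be sketched as a shared session object passed between bot functions (hypothetical names throughout; a real framework would persist and scope this state for you):

```python
# A session dict carries context from a general-inquiry bot to an HR bot.

def inquiry_bot(session, message):
    """Answer general questions and record which posting was discussed."""
    session["last_topic"] = "job_posting_42"
    return "Happy to answer questions about the role!"

def hr_bot(session, application):
    """Process an application using the context left by the inquiry bot."""
    return f"Application received for {session['last_topic']}."

session = {}
inquiry_bot(session, "What does the role involve?")
hr_bot(session, {"name": "A. Candidate"})
# "Application received for job_posting_42."
```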

    Key benefits

    The outputs of the security, compliance, and session management functions respectively enable three valuable benefits:

    No rogue bots: AI systems won't go rogue thanks to the framework's security. The framework ingrains security within AI systems, extending the same benefits for data privacy. This can help you comply with today's strict regulations in countries such as Germany and India about where data is stored, particularly data accessed through the cloud. The framework prevents data from being stored or used in ways contrary to security and governance policies, so AI can safely use the most crucial system resources.

    New services: The compliance function makes it easy to add new services external to the enterprise. Revisiting the train analogy, a new service is like a new car on the track. The framework incorporates it within the existing infrastructure without untimely delays so firms can quickly access the cloud for any necessary services to assist AI systems.

    Critical analytics: Finally, the session management function issues real-time information about system performance, which is important when leveraging multiple AI systems. It enables organizations to define metrics relevant to their use cases, identify anomalies, and increase efficiency via a machine-learning feedback loop with predictions for optimizing workflows.

    Necessary advancements

    Organizations that develop and deploy AI-driven business applications that can think, act, and complete processes autonomously without human intervention will need a sound deployment framework. Delivering a road map for what data is processed as well as how, where, and why, the framework aligns AI with an organization's core values and is vital to scaling these technologies for mission-critical applications. It's the foundation for AI's transformative potential and, more importantly, its enduring value to the enterprise.

    Author: Ramesh Mahalingam

    Source: TDWI

  • Intelligence, automation, or intelligent automation?

    There is a lot of excitement about artificial intelligence (AI), and also a lot of fear. Let’s set aside the potential for robots to take over the world for the moment and focus on more realistic fears. There is a growing acceptance that AI will change the way we work. There is also agreement that it is likely to result in a number of jobs disappearing or being replaced by AI systems, and others appearing.

    This has fueled the discussion on the ethics around intelligence, especially AI. Thoughtful commentators note that it is unwise to separate the two. Some have suggested frameworks for the ethical development of AI. Underpinning the ethical discussion, however, is the question of what exactly AI will be used for. It is hard to develop an ethics framework out of the blue. This blog unpicks the issue a little, sharing thoughts on where and how AI is used and how this affects the value that businesses obtain from it.

    Defining intelligence

    Artificial intelligence has been defined as the ability of a system to interpret data, learn from it, and then use what it has learnt to adapt and therefore achieve particular tasks. There are therefore three elements to AI:

    1. The system has to correctly interpret data and draw the right conclusions.

    2. It must be able to learn from its interpretation.

    3. It must then be able to use what it has learnt to achieve a task. Simply being able to learn or, indeed, to interpret data or perform a task is not enough to make a system AI-based.
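
    These three elements can be illustrated with a toy interpret-learn-act loop. This is entirely illustrative: a running average stands in for "learning", and the sensor readings are made up:

```python
# A minimal agent: interpret a reading, update internal state, then act.

class TinyAgent:
    def __init__(self):
        self.seen = []

    def interpret(self, reading):          # 1. interpret the data
        return float(reading)

    def learn(self, value):                # 2. learn from the interpretation
        self.seen.append(value)
        return sum(self.seen) / len(self.seen)

    def act(self, value, baseline):        # 3. use what was learnt
        return "alert" if value > 1.5 * baseline else "ok"

agent = TinyAgent()
for reading in ["10", "11", "9", "31"]:
    value = agent.interpret(reading)
    baseline = agent.learn(value)
    action = agent.act(value, baseline)
# the last reading (31) triggers "alert"
```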

    As consumers, most of our contact with AI is with systems like Alexa and Siri. These are definitely "intelligent," in that they take in what we say, interpret it, learn from experience and perform tasks correctly as a result. However, in business, there is general acceptance that much of the real value from AI will come from automation. In other words, AI will be used to mimic or replace human actions. This is now becoming known as 'intelligent automation'.

    Where does intelligence start and automation stop, though? There are plenty of tasks that can be automated simply and easily, without any need for an intelligent system. A lot of the time, the ability to automate tasks overshadows the need for intelligence to drive the automation. This typically results in very well-integrated systems, which often have decision-making capabilities. However, the quality of those decisions is often ignored.

    Good AI algorithms can suggest extremely good options for decisions. Ignoring this limits the value that companies can get out of their investments in AI. Equally, failing to consider whether the quality of a decision is good enough can lead to poor decisions being made. This undermines trust in the algorithm, which in turn means it is used less for decisions, again reducing the value. But how can you assess and ensure the quality of the decisions made or recommended by the algorithm?

    Balancing automation and intelligence

    An ideal AI deployment should strike a balance between automation and intelligence. If you lean too much towards the automation side and rely on simple rules-based automation, all you will be able to do is collect the low-hanging fruit. You will miss out on the potential to use the AI system to support more sophisticated decision making. Lean too much in the other direction, though, and you get intelligence without automation: systems like Alexa and Siri. Useful for consumers, but not so much for businesses.

    In business, analytics needs to be at the heart of an AI system. The true measure of a successful AI deployment lies in being able to mimic both human action and human decision making.

    An AI deployment has such a huge range of components that it would not be unreasonable to describe it as an ecosystem. This ecosystem might contain audio-visual interpretation functions, multisystem and/or multichannel integration, and human-computer interface components. However, none of those would mean anything without the analytical brain at the centre. Without that, the rest of the ecosystem is simply a lifeless body. It needs the analytics component to provide direction and interpretation of the world around it.

    Author: Yigit Karabag

    Source: SAS

  • Investing In Artificial Intelligence

    Artificial intelligence is one of the most exciting and transformative opportunities of our time. From my vantage point as a venture investor at Playfair Capital, where I focus on investing and building community around AI, I see this as a great time for investors to help build companies in this space. There are three key reasons.

    First, with 40 percent of the world’s population now online, and more than 2 billion smartphones being used with increasing addiction every day (KPCB), we’re creating data assets, the raw material for AI, that describe our behaviors, interests, knowledge, connections and activities at a level of granularity that has never existed.

    Second, the costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing, making AI applications possible and affordable.

    Third, we’ve seen significant improvements recently in the design of learning systems, architectures and software infrastructure that, together, promise to further accelerate the speed of innovation. Indeed, we don’t fully appreciate what tomorrow will look and feel like.

    We also must realize that AI-driven products are already out in the wild, improving the performance of search engines, recommender systems (e.g., e-commerce, music), ad serving and financial trading (amongst others).

    Companies with the resources to invest in AI are already creating an impetus for others to follow suit — or risk not having a competitive seat at the table. Together, therefore, the community has a better understanding and is equipped with more capable tools with which to build learning systems for a wide range of increasingly complex tasks.

    How Might You Apply AI Technologies?

    With such a powerful and generally applicable technology, AI companies can enter the market in different ways. Here are six to consider, along with example businesses that have chosen these routes:

    • There are vast amounts of enterprise and open data available in various data silos, whether web or on-premise. Making connections between these enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions (e.g., DueDil*, Premise and Enigma).
    • Leverage the domain expertise of your team and address a focused, high-value, recurring problem using a set of AI techniques that extend the shortfalls of humans (e.g., Sift Science or Ravelin* for online fraud detection).
    • Productize existing or new AI frameworks for feature engineering, hyperparameter optimization, data processing, algorithms, model training and deployment (amongst others) for a wide variety of commercial problems (e.g., H2O.ai, Seldon* and SigOpt).
    • Automate the repetitive, structured, error-prone and slow processes conducted by knowledge workers on a daily basis using contextual decision making (e.g., Gluru, x.ai and SwiftKey).
    • Endow robots and autonomous agents with the ability to sense, learn and make decisions within a physical environment (e.g., Tesla, Matternet and SkyCatch).
    • Take the long view and focus on research and development (R&D) to take risks that would otherwise be relegated to academia — but due to strict budgets, often isn’t anymore (e.g., DNN Research, DeepMind and Vicarious).

    There’s more on this discussion here. A key consideration, however, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productizing technologies for cheap means that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.

    Which Challenges Are Faced By Operators And Closely Considered By Investors?

    I see a range of operational, commercial and financial challenges that operators and investors closely consider when working in the AI space. Here are the main points to keep top of mind:


    • How to balance the longer-term R&D route with monetization in the short term? While more libraries and frameworks are being released, there’s still significant upfront investment to be made before product performance is acceptable. Users will often be benchmarking against a result produced by a human, so that’s what you’re competing against.
    • The talent pool is shallow: few have the right blend of skills and experience. How will you source and retain talent?
    • Think about balancing engineering with product research and design early on. Working on aesthetics and experience as an afterthought is tantamount to slapping lipstick onto a pig. It’ll still be a pig.
    • Most AI systems need data to be useful. How do you bootstrap your system without much data in the early days?


    • AI products are still relatively new in the market. As such, buyers are likely to be non-technical (or not have enough domain knowledge to understand the guts of what you do). They might also be new buyers of the product you sell. Hence, you must closely appreciate the steps/hurdles in the sales cycle.
    • How to deliver the product? SaaS, API, open source?
    • Include chargeable consulting, set up, or support services?
    • Will you be able to use high-level learnings from client data for others?


    • Which type of investors are in the best position to appraise your business?
    • What progress is deemed investable? MVP, publications, open source community of users or recurring revenue?
    • Should you focus on core product development or work closely on bespoke projects with clients along the way?
    • Consider buffers when raising capital to ensure that you’re not going out to market again before you’ve reached a significant milestone. 

    Build With The User In The Loop

    There are two big factors that make involving the user in an AI-driven product paramount. One, machines don’t yet recapitulate human cognition. To pick up where software falls short, we need to call on the user for help. And two, buyers/users of software products have more choice today than ever. As such, they’re often fickle (the average 90-day retention for apps is 35 percent).

    Returning expected value out of the box is key to building habits (hyperparameter optimization can help). Here are some great examples of products that prove that involving the user in the loop improves performance:

    • Search: Google uses autocomplete as a way of understanding and disambiguating language/query intent.
    • Vision: Google Translate or Mapillary traffic sign detection enable the user to correct results.
    • Translation: Unbabel community translators perfect machine transcripts.
    • Email Spam Filters: Google, again, to the rescue.

    We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer-term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.

    What’s The AI Investment Climate Like These Days?

    To put this discussion into context, let’s first look at the global VC market: Q1-Q3 2015 saw $47.2 billion invested, a volume higher than each of the full year totals for 17 of the last 20 years (NVCA).

    We’re likely to breach $55 billion by year’s end. There are roughly 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.

    So far, we’ve seen about 300 deals into AI companies (defined as businesses whose description includes such keywords as artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning) from January 1, 2015 through December 1, 2015 (CB Insights).

    In the U.K., companies like Ravelin*, Signal and Gluru* raised seed rounds. Approximately $2 billion was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339 million debt+credit), ZestFinance ($150 million debt), LiftForward ($250 million credit) and Argon Credit ($75 million credit). Importantly, 80 percent of deals were < $5 million in size, and 90 percent of the cash was invested into U.S. companies versus 13 percent in Europe. Seventy-five percent of rounds were in the U.S.

    The exit market has seen 33 M&A transactions and 1 IPO. Six events were for European companies, 1 in Asia and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532 million; $17 million raised), Elastica/Blue Coat Systems ($280 million; $45 million raised) and SupersonicAds/IronSource ($150 million; $21 million raised), which returned solid multiples of invested capital. The remaining transactions were mostly for talent, given that median team size at the time of the acquisition was 7 people.

    Altogether, AI investments will have accounted for roughly 5 percent of total VC investments for 2015. That’s higher than the 2 percent claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software.

    The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the U.S. Businesses must therefore have exposure to this market.

    Which Problems Remain To Be Solved?


    I spent a number of summers in university and three years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is very challenging, expensive, lengthy and regulated, and ultimately offers a transient solution to treating disease.

    Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real time, driving down cost of care over a patient’s lifetime while consequently improving outcomes.

    Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third-parties). Sure, the news might paint a different story, but the fact is that we’re still using the web and its wealth of products.

    On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge.

    Look at today’s clinical model. A patient presents into the hospital when they feel something is wrong. The doctor must conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g., in the case of cancer).

    Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions...

    Some companies are already hacking away at this problem:

    • Sano: Continuously monitor biomarkers in blood using sensors and software.
    • Enlitic/MetaMind/Zebra Medical: Vision systems for decision support (MRI/CT).
    • Deep Genomics/Atomwise: Learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
    • Flatiron Health: Common technology infrastructure for clinics and hospitals to process oncology data generated from research.
    • Google: Filed a patent covering an invention for drawing blood without a needle. This is a small step toward wearable sampling devices.
    • A point worth noting is that the U.K. has a slight leg up on the data access front. Initiatives like the U.K. Biobank (500,000 patient records), Genomics England (100,000 genomes sequenced), HipSci (stem cells) and the NHS care.data program are leading the way in creating centralized data repositories for public health and therapeutic research.

    Enterprise Automation

    Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9 trillion by 2020 (BAML). Coupled with the efficiency gains worth $1.9 trillion driven by robots, I reckon there’s a chance for near-complete automation of core, repetitive business functions in the future.

    Think of all the productized SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making.
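    The connect-and-program pattern behind tools like Zapier or Tray.io can be sketched as a trigger-action rule engine. This is an illustrative sketch only, not any vendor's API; the event fields, rule, and action names are all hypothetical.

```python
# Hypothetical sketch of the trigger-action pattern behind app-connecting
# tools: when an event arrives, every rule whose predicate matches runs
# its action. All names and event fields here are illustrative.

def low_stock_alert(event, context):
    """Action: flag a product for reorder when inventory runs low."""
    return f"Reorder {event['sku']} (only {event['quantity']} left)"

RULES = [
    # (predicate over the event, action to run when it matches)
    (lambda e: e["type"] == "inventory" and e["quantity"] < 10, low_stock_alert),
]

def dispatch(event, context=None):
    """Run every rule whose predicate matches the incoming event."""
    return [action(event, context) for predicate, action in RULES if predicate(event)]

print(dispatch({"type": "inventory", "sku": "A-42", "quantity": 3}))
```

Contextual data points (the `context` argument) could carry the extra signals the paragraph mentions, informing whether an action fires at all.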

    Perhaps we could eventually re-imagine the new eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfillment and shipping. Of course, this is probably a ways off.

    I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long-term innovation, especially considering that far less is occurring within universities. VC was born to fund moonshots.

    We must remember that access to technology will, over time, become commoditized. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering.

    Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g., the user experience layer ). As such, there’s renewed focus on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses.

    Finally, you must have exposure to the U.S. market, where the lion’s share of value is created and realized. We have an opportunity to catalyze the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond.

    Source: TechCrunch

  • Is AI a threat or an opportunity to data engineers?

    Is AI a threat or an opportunity to data engineers?

    Humans losing jobs to robots has been the preoccupation of economists and sci-fi writers alike for almost 100 years. AI systems are the next perceived threat to human jobs, but which jobs? Sourcing the logic from numerous open source packages or paid API services, connecting disparate datasets, and maintaining a pipeline are complex tasks that AIs are ill-suited to do at present. 

    AI and the data pipeline

    A well set up data pipeline is a thing of beauty, seamlessly connecting multiple datasets to a business intelligence tool to allow clients, internal teams, and other stakeholders to perform complex analysis and get the most out of their data. 

    Data engineers thrive on interesting challenges: bringing terabytes of data from wherever it lives to where it can be analyzed, transforming it using various libraries and services, and keeping the pipeline stable. However, the data preparation phase of the whole process poses its own issues. It can be a creative process, and it’s certainly necessary, but saving that logic and automating its repeated execution every few hours is a challenge. Today, the way to solve this challenge is by bringing in artificial intelligence and machine learning.

    Augmented analytics is the next iteration of business intelligence, where AI elements are incorporated into every phase of the BI process. The powerful AI (artificial intelligence) analytics systems that are emerging today have AI assisting users in a broad range of ways, but we’ll stay focused on data prep for this article. 

    Three sections of the data preparation process where AI can help that we’ll discuss are data cleaning and transformation, extracting and loading, and verifying the prepared data. 

    Clean as you go

    The saying 'data is the new oil' gets tossed around enough to have already become a cliche, but for purposes of our discussion it’s an especially apt metaphor. Most companies are sitting on huge stores of data, but in its unprocessed form, it’s not very useful. Even worse, analyzing non-normalized data can produce harmful and misleading results. To continue with the oil metaphor, you need a stable and reliable pipeline to take your data from where it’s stored to where it’ll be processed so that its true value can be harnessed.

    While that data is in motion, data engineers can digest it so that it’s closer to a usable state by the time it hits the BI system. BI platforms are already using AI to help with the data cleansing process in a variety of ways. Let’s walk through how AI can assist you:

    1. AI assistance can recommend a data model structure, including which columns to join and which to compound, and may even create dimension tables to facilitate the fact table joins.
    2. AI systems can apply simple rulesets to help standardize the data by doing things like making all text lowercase and removing blank spaces before and after values. 
    3. If you already have a perfectly formatted dataset to use as a learning dataset, AI assistance can even be trained on this to recognize how the larger dataset should look, allowing it to take a holistic approach to cleansing, rather than you telling it specific tasks to do. 
    4. As AI assistance learns how you want your data to look, the system can even scan all the columns and make recommendations as to what to fix, implement active learning, or go ahead and fix errors on its own, such as removing redundant records (duplication caused by misspelling, for example) or using context clues to fill in missing values. 
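    The simple rulesets in steps 2 and 4 above can be sketched in a few lines. This is a minimal illustration of the transformations themselves (lowercasing, trimming blanks, dropping records that become identical after standardization); real BI platforms learn which rules to apply, which this sketch does not attempt.

```python
# Minimal sketch of rule-based cleansing: standardize values, then drop
# records that are duplicates once standardized (e.g. respaced or
# differently cased copies of the same row).

def standardize(value):
    """Rule set: lowercase text and strip blanks before/after the value."""
    return value.strip().lower()

def deduplicate(records):
    """Keep only the first occurrence of each standardized record."""
    seen, cleaned = set(), []
    for record in records:
        key = tuple(standardize(v) for v in record)
        if key not in seen:
            seen.add(key)
            cleaned.append(key)
    return cleaned

rows = [(" Alice ", "NYC"), ("alice", "nyc"), ("Bob", "LA")]
print(deduplicate(rows))  # the respaced/recased duplicate collapses away
```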

    Extracting and loading

    The rise of cloud data warehouses has changed the way companies treat their data. In the past, well-organized databases were needed to keep records in order. Today, data comes from a wide array of different sources and in a variety of different forms, from user-generated to sensor data. More and more frequently we even witness companies using third party data to enrich their business logic (how will the weather forecast affect my sales?). 

    This change coincided with an increase in the sophistication of AI data analytics systems, allowing them to deal with data in all its types, structured (numerical) and unstructured (text, image, video). Data storage on cloud warehouses like Redshift is cheap, and different roles are often responsible for data gathering and storage, so rather than worry about how everything is formatted, companies just pump everything into the warehouse, however it’s formatted, and deal with it later.

    This is another place where BI with AI has a chance to shine, extracting the data, performing transformations on it, then loading it into the BI tool. The same AI abilities mentioned before can be applied in this way to end up with usable data at the endpoint: removing duplicate records, filling blank values, and suggesting other cleansing and transformation actions, such as clustering and segmentation, based on the learning dataset. However your data is stored, the right AI analytics tool can help get it into better shape for when you create your single source of truth; it can also help as you load your data into your BI platform or data science tool.

    While you’re moving your data into your BI system, the big chance for an AI assist is in monitoring the process. If a load fails, exceeds the normal time threshold or the forecasted one, the AI can learn that and ping the engineer to let them know there’s a problem. A sudden change in the volume of data being loaded could also be worth a mention, so that the engineer can look into it and see if there’s a larger problem. 
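    The monitoring idea above can be sketched simply: learn a normal range from past loads and ping the engineer when a new load falls outside it. A minimal sketch, assuming a plain list of historical load durations (or volumes); a real system would learn the threshold rather than hard-code it.

```python
# Illustrative load monitor: flag a load whose duration (or volume) is
# more than k standard deviations away from the historical mean.
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """True when the new measurement falls outside the learned range."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > k * sigma

load_minutes = [12, 14, 13, 12, 15, 13, 14]
print(is_anomalous(load_minutes, 45))  # unusually slow load -> alert
print(is_anomalous(load_minutes, 13))  # within normal range -> no alert
```

The same check applied to row counts would catch the sudden volume changes mentioned above.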

    The bottom line is that a strong AI analytics system can be a second set of eyes for a busy data engineering team, freeing them to focus on the challenges that drive more value to the analytics team, and ultimately the business.

    Outliers, efficiency, and verifying results

    Outlier detection is one task that an AI system can be designed to handle that would have huge benefits for data engineers dealing with large volumes of not-quite-perfect data. The AI would monitor tables as they get created and new data gets loaded, and check the outputs. As the system scans the values within a column, it could test for things like uniqueness, referential integrity (to values that are keys in other tables), skewed distribution, null values, and accepted values. It would basically be checking the whole table and asking 'does this column look correct?' based on a series of rules that could be applied to it. If the AI believes that one of the rules could apply, and that the column’s values do not meet the rule’s conditions, then it would send an alert to the engineers.
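    A few of those column rules (uniqueness, nulls, accepted values) can be expressed directly; the sketch below shows the check-and-alert shape described above, with hypothetical column names, leaving out the statistical rules like skew detection.

```python
# Simplified column audit: each rule inspects a column's values and any
# failures are surfaced as alert messages for the engineers.

def check_unique(values):
    return len(values) == len(set(values))

def check_no_nulls(values):
    return all(v is not None for v in values)

def check_accepted(values, allowed):
    return all(v in allowed for v in values)

def audit(column, values, allowed=None):
    """Return an alert message for every rule the column violates."""
    alerts = []
    if not check_unique(values):
        alerts.append(f"{column}: duplicate values found")
    if not check_no_nulls(values):
        alerts.append(f"{column}: null values found")
    if allowed is not None and not check_accepted(values, allowed):
        alerts.append(f"{column}: unexpected values found")
    return alerts

print(audit("account_id", ["A1", "A2", "A2", None]))
```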

    Trusting your data without checking your work is a recipe for disaster. Having a few questions you already know ballpark answers to can be a great way to test your AI-prepped data in the aftermath. If your answers come back within acceptable limits, then you know the prep process was (acceptably) successful. If there are major discrepancies, you may have to retrain the system or adjust the strictness/laxness of the settings you’re using.

    Some other tasks a BI system with AI can assist with include showing you which joins are occurring most frequently across your model and suggesting pre-aggregation. This could prove useful for data analysts to know and help them with speedier queries down the road. AI could also scan columns and test for uniqueness. For example, if every value needs to be unique, like an ID column for all your Salesforce accounts, and there are two different users with the same account ID, then the AI could call that out. For purely numerical data, AI could identify outliers that might indicate improperly entered data. Either way, the AI is once again an extra set of eyes, performing detailed, routine work, at scale, and surfacing the results to human data engineers only when necessary. 

    Is AI taking engineering jobs?

    Although humans losing jobs to robots makes a nice story, for data engineers it is far from the truth. Tackling routine tasks like eliminating redundant data, filling in gaps in datasets, and pinging human engineers when anomalies arise are all places where AI analytics systems can really add value: doing the heavy lifting that humans don’t really want to do anyway, and augmenting hard-working data engineers so they can tackle the challenging problems that will lead to bigger rewards for the company down the line.

    Author: Inna Tokarev Sela

    Source: Sisense

  • Is Artificial Intelligence shaping the future of Market Intelligence?

    Is Artificial Intelligence shaping the future of market intelligence?

    Global developments, increasing competition, shifting consumer demands... These are only a few of the countless external forces that will shape the exciting world of tomorrow.

    As a company, how can you be prepared for rapid changes in the environment?

    That's where market intelligence proves its value.

    Companies require proactive and forward-thinking market intelligence in order to detect and react to critical market signals. This kind of intelligence is critical to guarantee sustainable profits and ensure survival in today’s highly competitive environment.

    The market intelligence field over the years

    Just like the world itself, the market intelligence field has seen some major changes over the past couple of decades. For example, the rise and popularity of social media has made it notably easier to track data about consumers and competitors. It is widely accepted that this field will undergo changes at an even higher pace in the future, due to significant technical, social and organizational developments.

    But what are the developments and trends that will impact market intelligence most over the next few years? According to the research paper State of the Art and Trends of Market Intelligence, the most impactful developments are Artificial Intelligence, Data Visualization, and the GDPR legislation. The focus of this article is on the role of Artificial Intelligence (AI).

    Artificial Intelligence

    Artificial Intelligence is the intelligence displayed by machines, often characterized by learning and the ability to adapt to changes. According to Qualtrics, 93% of market researchers see AI as an opportunity for the research business.

    Where can AI add value?

    AI can add value in the processing of large and unstructured datasets. Open-ended data can be processed with ease thanks to AI technologies such as Natural Language Processing (NLP). NLP enables computers to understand, interpret and manipulate natural human language. This way NLP can assist in tracking sentiment across different sentences. In business, this can be applied to, for example, the assessment of reviews, which is usually a slow task. With NLP, however, this process can be streamlined efficiently.
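    As a toy illustration of sentiment tracking for reviews: production NLP relies on trained models, but the core idea of mapping words to polarity and aggregating per sentence can be sketched with a hand-made lexicon (the word list below is purely illustrative).

```python
# Toy lexicon-based sentiment scorer: sum the polarity of known words.
# Real NLP systems use trained models; this only shows the idea.

LEXICON = {"great": 1, "love": 1, "excellent": 1,
           "slow": -1, "broken": -1, "terrible": -1}

def sentiment(sentence):
    """Classify a sentence as positive, negative, or neutral."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0)
                for w in sentence.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Great product, I love it!"))
print(sentiment("Shipping was slow and the box arrived broken."))
```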

    NLP can also be used as an add-on for language translation programs. It allows for the rudimentary translation of a text, before it is translated by a human. This method also makes it possible to quickly translate reports and documents written in another language, which can be very beneficial during the collection of raw data.

    Additionally, NLP can assist with practices like Topic Modeling, which consists of the automatic generation of keywords for different articles and blogs. This tool makes the process of building a huge set of labeled data more convenient. Another method, which also utilizes NLP, is Text Classification: an algorithm that can automatically suggest a related category for a specific article or news item.
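    To make the keyword-generation idea concrete: real topic models (LDA and the like) are statistical, but the output shape, a handful of labels per article, can be sketched by picking the most frequent non-stopword terms. A toy illustration only; the stopword list and example text are invented.

```python
# Illustrative automatic keyword generation: count terms that aren't
# common stopwords and return the top n as the article's labels.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "are"}

def keywords(text, n=3):
    """Return the n most frequent non-stopword terms in the text."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

article = ("Market intelligence teams track competitors. "
           "Intelligence tools scan market signals and competitors daily.")
print(keywords(article))
```

A text classifier would go one step further, mapping such term counts to a suggested category.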

    Desk research is extremely valuable in the process of gathering relevant market intelligence. However, it is very time-consuming. This is problematic, because important insights may not arrive at the desk of the specific decision maker in time. This can be detrimental to a company’s ability to react quickly in a fast-changing business environment. AI can speed up this process, as it can rapidly read all kind of sources and identify trends significantly faster than traditional desk research ever could.

    The future of market intelligence

    Clearly, the applications mentioned in this article are just a selection of the wide range of possibilities AI is providing within the field of market research and intelligence. The popularity of this technology is increasing rapidly, and it can unlock stunning amounts of relevant and rich information for all kinds of fields.

    Does this imply that traditional methods and analysis are redundant and not needed anymore?

    Of course not! AI also has its own limitations.

    In the next few years, the true value of AI and other technological developments will be shown. The real power lies in the combination of AI with more traditional research methods. The results will allow businesses to arrive at actionable insights faster, and in turn, improve solid and data-driven decision-making. This way market intelligence can help companies take the steps that lead to tomorrow’s success.

    Author: Kees Kuiper

    Source: Hammer Intel

  • Is the multicloud a reason for organizations' slow adoption of AI?

    Is the multicloud a reason for organizations' slow adoption of AI?

    Despite its enormous potential, the adoption of AI is progressing relatively slowly. According to Efrym Willems, Business Development IBM Watson, Analytics, IoT & IBM Cloud at Tech Data, the multicloud is a frequently cited reason to leave the technology aside. In his view, that is unjustified: 'AI is now a realistic option even in multicloud environments'.

    In early 2019, low-code developer Mendix announced a far-reaching integration of its platform with IBM Cloud Services. This gives application developers easy access to the functions of the artificial intelligence (AI) platform IBM Watson. Moreover, applications built with Mendix run directly in the IBM Cloud. At first glance that may seem like a detail, but it is an important step toward broader adoption of AI in multicloud environments. And it is a welcome development: according to analysts and AI vendors, AI adoption is lagging, with the multicloud standing in the way. Organizations do not know how to bring their fragmented data landscape together. 'Yet IBM proves that multicloud does not have to be a barrier at all,' says Willems.

    AI creates summaries and makes diagnoses

    Good news, because AI has proven its effectiveness. A recent example is the highlight reel of the Wimbledon final, which was compiled entirely by the AI system IBM Watson. 'The battle between tennis legends Roger Federer and Novak Djokovic in the 2019 Wimbledon final lasted almost five hours. Yet a summary was ready two minutes after the match. The system selected the highlights based on the sound and facial expressions of the crowd. Twenty minutes after the final, even personalized summaries were available,' says the Tech Data expert. AI has also proven its value in areas such as optimizing production processes and improving healthcare. Dutch hospitals, for instance, are experimenting extensively with AI, including in diagnostics. 'Within 10 minutes, Watson made the correct diagnosis for a woman with a rare form of leukemia.'


    Still, organizations have not jumped on the AI train en masse. According to research, only a quarter of organizations have a company-wide AI strategy. Concerns about data integration often get in the way. 'The structured and unstructured data needed for analyses are often spread across multiple locations, both in the cloud and on-premises,' Willems explains. 'But that no longer has to be a problem. Effective use of AI is perfectly possible in the multicloud.' A number of conditions do matter, however:

    1. AI on all platforms

    For AI to work well, the technology must be present on all platforms where the data and applications are used. 'That is exactly why IBM has made its Watson solution available for various platforms, via microservices,' Willems explains. 'These run in a Kubernetes container, on-premises or in the IBM Cloud, but they also function fine in the clouds of Microsoft, Amazon and Google, for example. So the AI comes to the data, instead of all the data having to come to the AI. This approach offers another advantage: it prevents organizations from being locked into a specific environment.'

    2. Data connectors

    For some organizations, the above is not enough. Data can be fragmented even further, for example across environments such as Dropbox, Salesforce, Tableau and Looker. In those cases it is important that data connectors are available for these environments, so the AI solution can still use the data stored there. In addition, IBM last year enriched Watson Studio, its platform for data science and machine learning, with improved integration with Hadoop distributions (CDH and HDP). According to Willems, this also makes it possible to run analytics where the data resides and to use the compute power available there.

    3. Alternative: bring the data to one place

    An alternative is to bring datasets together on one central platform. 'IBM Cloud, the new name for SoftLayer since 2018, offers that capability. For example with IaaS or PaaS services, or simply by offering cloud storage.' It is also possible to integrate IaaS and PaaS services into a multicloud environment, Willems adds.

    4. Broad support for development tools

    In the scenario outlined above, the integration of Mendix with IBM Cloud is an important development for AI adoption. 'Once the data has been consolidated, purpose-built apps can unlock and analyze it,' says Willems. 'Developing those apps is fast and relatively easy with low-code and no-code platforms from providers such as Mendix or OutSystems.' In addition, IBM Bluemix, IBM's developer toolset, is of course now available under the IBM Cloud flag.

    No obstacle

    With the points above taken into account, AI can add value regardless of the chosen cloud deployment. 'Whether an organization brings the AI to the data or the other way around: in both cases a multicloud environment is no longer an obstacle,' Willems concludes.

    Bron: BI-Platform

  • Is digitalization really capable of making your business paperless?

    Is digitalization really capable of making your business paperless?

    In a recent survey, 79% of content management professionals admitted that more than a quarter of their organizations’ total records are still on paper documents. The same research also showed that 93% of respondents believe extracting smart data from paper records would revolutionize the value of that data for the business, with 70% saying the way to achieve this is with digitalized records.

    It’s not much of a leap to imagine similar problems 40 years ago, when the phrase ‘paperless office’ was just coming into common use. Decades later, our collective reliance on paper remains a source of frustration and inefficiency. As the research revealed, close to one-third of the survey respondents said it is difficult to access data from their paper records. 60% say that these records slow down business processes while 93% agree that if all their paper records were lost, it would negatively impact their organization.  

    Furthermore, almost every content management professional surveyed at the event agreed that the ability to easily extract data from paper records would add value to their businesses. When specifically asked if digitizing would help, 70% said 'yes'.

    Thankfully, times are rapidly changing, and the arrival of digitalization solutions that blend advanced robotics with artificial intelligence are enabling organizations to not only address the inefficiency of manual paper processing, but to extract new value from the data they create.

    One prominent approach to the digitalization challenge has been to build and patent robots that can sort and scan paper and make the resulting data available for use. The technology automates 80% of the conversion process, which includes paper handling, fastener removing, and digital imaging. Specifically, robots are more efficient at removing staples and other fasteners, which minimizes risk of paper jams and tears. In contrast, manual cataloguing and indexing is error-prone.

    As a result, this kind of technology can process an entire banker box within one to two hours; it would take a human about four to six hours to digitalize the same content. When implemented at scale, this equates to processing thousands of boxes per day.

    Using robots to digitalize paper is also inherently more secure. Robots do 90% of the paper handling and 90% of the indexing, meaning organizations don’t need to worry about the wrong people reading the most sensitive documents.

    The real thing: How Coca-Cola Bottlers’ Sales and Services digitalized its supply chain

    Real-world examples illustrate the impact. For example, optimizing the supply chain through digitalization can improve both efficiency and competitiveness, compared to organizations that rely on traditional methods of administering increasingly complex and dynamic supply chains that can leave them overwhelmed.

    In particular, supply chain operations that rely heavily on paper and physical records for management face financial consequences when their systems fail to keep pace with fast-changing circumstances and demands.

    The Coca-Cola Company (TCCC) and its bottlers, Coca-Cola Bottlers’ Sales and Services (CCBSS), are multinational organizations with complex logistics made up of suppliers and channels.

    They have a unique approach to scaling their business by franchising the manufacturing, packaging, and logistics to the CCBSS 'bottlers' that fulfil the orders of Coca-Cola customers. While TCCC outsources all of its manufacturing and logistics management of its products to CCBSS, it owns the contracts with customers like Costco, Walmart, Kroger, etc.  

    The existing process required CCBSS to provide evidence of the fulfilment of goods and services each month. This is achieved via a proof of delivery (PoD) ticket between the deliverers and receivers. The burden of proof falls to the bottlers; if the document isn’t provided, the customer won’t pay the bill.

    The pre-digital procedure was to photocopy the PoD at the bottlers’ office and then FedEx the original to a supply chain management company. However, because CCBSS contracted seven different companies to support the ecosystem, each single document was sifted through an overly complex and inefficient system. The result was a process that lost millions of these documents a year, with a significant impact on revenue.

    Recently, CCBSS decided to digitalize parts of its operation, including supply chain management, in order to maximize accuracy and efficiency. Through digitalization, CCBSS estimates that it will soon save millions of dollars by consolidating vendors through document lifecycle management. Its document management, image storage, indexing, physical storage, and shredding suppliers have been replaced with the Ripcord solution, turning what was once a bird’s nest of process outsourcing into a streamlined system that can index tens of millions of documents with far greater quality, accuracy, and speed.

    Author: Alex Fielding

    Source: Dataversity

  • Less is More: Confusion in AI Describing Terms

    An overview of the competing definitions for addressing AI’s challenges

    There are a lot of people out there working to make artificial intelligence and machine learning suck less. In fact, earlier this year I joined a startup that wants to help people build a deeper connection with artificial intelligence by giving them a more direct way to control what algorithms can do for them. We’re calling it an “Empathetic AI”. As we attempt to give meaning to this new term, I’ve become curious about what other groups are calling their own proposed solutions for algorithms that work for us rather than against us. Here’s an overview of what I found.

    Empathetic AI

    For us at Waverly, empathy refers to giving users control over their algorithms and helping them connect with their aspirations. I found only one other instance of a company using the same term, but in a different way. In 2019, Pega used the term Empathetic AI to sell its Customer Empathy Advisor™ solution, which helps businesses gather customer input before providing a sales offer. This is in contrast to the conventional approach of e-commerce sites that make recommendations based on a user’s behaviour.

    Though both Waverly and Pega view empathy as listening to people rather than proactively recommending results based on large datasets, the key difference in their approaches is who interacts with the AI. At Waverly, we’re creating tools meant to be used by users directly, whereas Pega provides tools for businesses to create and adjust recommendations for users.

    N.B. Empathetic AI shouldn’t be confused with Artificial Empathy (AE), a technology designed to detect and respond to human emotions, most commonly used in systems like robots and virtual assistants. There aren’t many practical examples of this today, but some notable attempts are robot pets with a limited simulated emotional range, such as Pleo, Aibo, and Cozmo. In software, attempts are being made to deduce human emotions from signals like your typing behaviour or tone of voice.

    Responsible AI

    This is the most commonly used term among large organizations heavily invested in improving AI technology. Accenture, Microsoft, Google, and PwC all have some kind of framework or principles for what they define as Responsible AI.

    Here’s an overview of how each of these companies interprets the concept of Responsible AI:

    • Accenture: A framework for building trust in AI solutions. This is intended to help guard against the use of biased data and algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy.
    • Microsoft: Ethical principles that put people first, including fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability.
    • Google: An ethical charter that guides the development and use of artificial intelligence in research and products under the principles of fairness, interpretability, privacy, and security.
    • PwC: A tool kit that addresses five dimensions of responsibility (governance, interpretability & explainability, bias & fairness, robustness & security, ethics & regulation).

    Though it’s hard to extract a concise definition from each company, combining the different terms they use to talk about “responsibility” in AI gives us some insight into what these companies care about — or at least what they consider sellable to their clients.

    AI Fairness

    You might have noticed that fairness comes up repeatedly as a subset of Responsible AI, but IBM has the biggest resource dedicated solely to this concept with their AI Fairness 360 open source toolkit. The definition of fairness generally refers to avoiding unwanted bias in systems and datasets.
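    To make 'unwanted bias' concrete, one of the simplest metrics in this space is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below is a toy illustration in plain Python with invented data, not the AI Fairness 360 API itself:

```python
# Disparate impact on invented toy data: favorable-outcome rate of the
# unprivileged group divided by that of the privileged group.
records = [
    # (group, received_favorable_outcome)
    ("privileged", True), ("privileged", True),
    ("privileged", True), ("privileged", False),
    ("unprivileged", True), ("unprivileged", False),
    ("unprivileged", False), ("unprivileged", False),
]

def favorable_rate(group):
    outcomes = [ok for g, ok in records if g == group]
    return sum(outcomes) / len(outcomes)

disparate_impact = favorable_rate("unprivileged") / favorable_rate("privileged")
print(f"Disparate impact: {disparate_impact:.2f}")  # 0.25 / 0.75 -> 0.33
```

    A common rule of thumb (the '80% rule') flags ratios below 0.8; toolkits like AI Fairness 360 bundle dozens of such metrics, plus algorithms to mitigate the bias they reveal.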

    Given the increasing public attention toward systemic problems related to bias and inclusivity, it’s no surprise that fairness is one of the most relevant concepts for creating better AI. Despite the seemingly widespread understanding of the term, much-needed conversations are still happening around the impacts of fairness. A recent article in HBR made the case that fairness is not only ethical; it would also make companies more profitable and productive. To get a better sense of how the tiniest decision about an AI’s programming can cause massive ripples in society, check out Parable of the Polygons, a brilliant interactive demo by Nicky Case.

    Trustworthy AI

    In 2018, the EU put together a high-level expert group on AI to provide advice on its AI strategy through four deliverables. In April 2019, the EU published the first deliverable: a set of ethics guidelines for Trustworthy AI, which claims that this technology should be:

    1. Lawful — respecting all applicable laws and regulations
    2. Ethical — respecting ethical principles and values
    3. Robust — both from a technical perspective and with regard to its social environment

    The guidelines are further broken down into 7 key requirements, covering topics like agency, transparency, and privacy, among others.

    Almost exactly a year later, Deloitte released a trademarked Trustworthy AI™ Framework. It’s disappointing that they don’t even allude to the extensive work done by the EU before claiming ownership of the term, repurposing it to create their own six dimensions that look a lot like what everyone else is calling Responsible AI. To Deloitte, Trustworthy AI™ is fair and impartial, transparent and explainable, responsible and accountable, robust and reliable, respectful of privacy, and safe and secure. The framework even comes complete with a chart that can be easily added to any executive’s PowerPoint presentation.

    Finally, in late 2020, Mozilla released their whitepaper on Trustworthy AI with their own definition.

    Mozilla defines Trustworthy AI as AI that is demonstrably worthy of trust, tech that considers accountability, agency, and individual and collective well-being.

    Though they did acknowledge that it’s an extension of the EU’s work on trustworthiness, the deviation from the EU-established understanding of Trustworthy AI perpetuates the trend of companies not aligning on communication.

    Explainable AI (XAI) and Interpretable AI

    All of these different frameworks and principles won’t mean anything if the technology is ultimately hidden in a black box and impossible to understand. This is why many of the frameworks discussed above refer to explainable and interpretable AI.

    These terms refer to how much an algorithm’s code can be understood and what tools can be used to understand it. They’re often used interchangeably, as on this Wikipedia page, where interpretability is listed as a subset of explainability. Others have a different perspective, like the author of this article, who discusses the differences between the two and positions the terms on a spectrum.

    Due to the technical nature of these terms, my understanding of their differences is limited. However, it seems a distinction is needed between the term “Explainable AI” (XAI) and “explainable model”. The chart in that article depicts the different models that algorithms can be based on, whereas the Wikipedia page talks about the broader concept of XAI. At this point, it feels like splitting hairs rather than providing clarification for most people, so I’ll leave this debate to the experts.
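    One way to ground the distinction without splitting hairs: in an interpretable model, the parameters themselves are the explanation. A minimal sketch with invented data (ordinary least squares on a single feature):

```python
# A one-feature linear model fit by ordinary least squares. The single
# learned coefficient IS the explanation: "each unit of x adds ~w to y."
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # invented observations

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - w * mean_x

print(f"prediction = {w:.2f} * x + {b:.2f}")
```

    The fitted slope can be read directly off the model; a deep network’s millions of weights offer no such readout, which is where separate XAI tooling comes in.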

    Competing definitions will cost us

    As I take stock of all these terms, I find myself more confused than reassured. The industry is using words that carry quite a bit of heft in everyday language, but redefining them in relatively arbitrary ways in the context of AI. Though there are some concerted efforts to create shared understanding, most notably around the EU guidelines, the scope and focus of each company’s definitions are different enough that it’s likely to cause problems in communication and public understanding.

    As a society, we seem to agree that we need AI systems that work in humanity’s best interest, yet we’ve still found a way to make it a race to see who gets credit for the idea rather than the solution. In fact, an analysis by OpenAI — the AI research and deployment company whose mission it is to ensure that AI benefits all of humanity — shows that competitive pressure could actually push companies to under-invest in safety and cause a collective action problem.

    Though alignment would be ideal, diversity at this early stage is a natural step toward collective understanding. What’s imperative is that we don’t get caught up trying to find terms that make our companies sound good, and instead actually take the necessary steps to create AI systems that provide favourable outcomes for all of us.

    Author: Charlie Gedeon

    Source: Towards Data Science

  • Machine learning, AI, and the increasing attention to data quality

    Data quality has been going through a renaissance recently.

    As a growing number of organizations increase efforts to transition computing infrastructure to the cloud and invest in cutting-edge machine learning and AI initiatives, they are finding that the main barrier to success is the quality of their data.

    The old saying “garbage in, garbage out” has never been more relevant. With the speed and scale of today’s analytics workloads and the businesses that they support, the costs associated with poor data quality are also higher than ever.

    This is reflected in a massive uptick in media coverage on the topic. Over the past few months, data quality has been the focus of feature articles in The Wall Street Journal, Forbes, Harvard Business Review, MIT Sloan Management Review and others. The common theme is that the success of machine learning and AI is completely dependent on data quality. A quote that summarizes this dependency very well is this one by Thomas Redman: ''If your data is bad, your machine learning tools are useless.''

    The development of new approaches towards data quality

    The need to accelerate data quality assessment, remediation, and monitoring has never been more critical for organizations, and they are finding that traditional approaches to data quality don’t provide the speed, scale, and agility required by today’s businesses.

    For this reason, the highly rated data preparation business Trifacta recently announced an expansion into data quality, unveiling two major new platform capabilities: active profiling and smart cleaning. This is the first time Trifacta has expanded its focus beyond data preparation. By adding new data quality functionality, the business aims to handle a wider set of data management tasks as part of a modern DataOps platform.

    Legacy approaches to data quality involve many manual, disparate activities as part of a broader process. Dedicated data quality teams, often disconnected from the business context of the data they are working with, manage the process of profiling, fixing and continually monitoring data quality in operational workflows. Each step must be managed in a completely separate interface. It’s hard to iteratively move back-and-forth between steps such as profiling and remediation. Worst of all, the individuals doing the work of managing data quality often don’t have the appropriate context for the data to make informed decisions when business rules change or new situations arise.

    Trifacta uses interactive visualizations and machine intelligence to guide users, highlighting data quality issues and providing intelligent suggestions on how to address them. Profiling, user interaction, intelligent suggestions, and guided decision-making are all interconnected, each driving the others. Users can seamlessly transition back and forth between steps to ensure their work is correct. This guided approach lowers the barrier to entry and helps democratize the work beyond siloed data quality teams, allowing those with the business context to own and deliver quality outputs with greater efficiency to downstream analytics initiatives.
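    The profiling step at the heart of this workflow is easy to sketch. The toy profiler below (plain Python with invented data, not Trifacta's actual engine) surfaces the kinds of issues active profiling highlights: missing values, cardinality, and type mismatches:

```python
# Minimal column profiler: for each field, report missing values,
# distinct values, and whether every non-missing entry parses as a number.
def is_number(value):
    """True if the value parses as a float."""
    try:
        float(value)
        return True
    except ValueError:
        return False

def profile(rows):
    """Per-column report: missing count, distinct count, numeric check."""
    report = {}
    for col in rows[0]:
        values = [row[col] for row in rows]
        present = [v for v in values if v not in ("", "n/a", None)]
        report[col] = {
            "missing": len(values) - len(present),
            "distinct": len(set(present)),
            "all_numeric": bool(present) and all(is_number(v) for v in present),
        }
    return report

rows = [
    {"name": "Ann",  "age": "34",  "city": "Austin"},
    {"name": "Bob",  "age": "",    "city": "Boston"},
    {"name": "Cruz", "age": "n/a", "city": "Austin"},
]

for col, stats in profile(rows).items():
    print(col, stats)
```

    A profile like this is what lets a tool suggest remediations (fill, drop, or re-type a column) instead of leaving the user to eyeball raw rows.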

    New data platform capabilities like this are only a first (albeit significant) step into data quality. Keep your eyes open and expect more developments towards data quality in the near future!

    Author: Will Davis

    Source: Trifacta

  • Machine learning: definition and opportunities

    What is machine learning?

    Machine learning is an application of artificial intelligence (AI) that gives computers the ability to continually learn from data, identify patterns, make decisions and improve from experience in an autonomous fashion over time without being explicitly programmed.
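    The phrase 'without being explicitly programmed' becomes concrete in even the smallest example: instead of hand-coding a rule, we write a generic procedure that adjusts a parameter to fit example data. A toy sketch (invented data, one-parameter model):

```python
# Learn y = w * x from examples by gradient descent: the rule (w) is
# never hand-coded; it is estimated from the data.
examples = [(1, 3), (2, 6), (3, 9), (4, 12)]  # hidden rule: y = 3x

w = 0.0
learning_rate = 0.01
for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= learning_rate * grad

print(f"learned w = {w:.3f}")  # converges toward 3.0
```

    The same loop, scaled up to millions of parameters, is the core of the systems described below; only the model and the data change.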

    How big will it be?

    According to the International Data Corporation (IDC), spending on AI and machine learning will grow from US$37.5 billion in 2019 to US$97.9 billion by 2023.

    What’s the opportunity?

    Machine learning is having a big impact on the healthcare industry by using data from wearables and sensors to assess a patient’s health in real-time. Unexpected and unpredictable patterns can be identified from masses of independent variables leading to improved diagnosis, treatment and prevention.

    Machine learning is also being used in the financial services industry as a way of preventing fraud. Systems can analyze millions of bits of data relating to online buyer and seller behavior, which by themselves wouldn’t be conclusive, but together can form a strong indication of fraud.

    Oil and gas is another industry starting to benefit from machine learning technology. For example, miles of pipeline footage shot by drones can be analyzed pixel by pixel, identifying potential structural weaknesses that humans would not be able to see.


    As more and more industries rely on huge volumes of data, and with computational processing becoming cheaper and more powerful, machine learning will lead to rapid and significant improvements to business processes and transform the way companies make decisions. Models can be quickly and automatically produced that analyze bigger, more complex data sets, uncover previously unknown patterns, and identify key insights, opportunities, and risks.

    Source: B2B International

  • Machines are shaping the future of customer experience

    Global research by Futurum Research, commissioned by SAS, shows that by 2030, 67% of all interactions between customers and companies will be handled by smart machines. The study finds that technology will be the biggest driving force behind the customer experience. This means brands must re-examine their customer ecosystems in order to respond to empowered consumers and new technologies.

    In recent years, technology has completely upended the way companies and consumers interact. Consumer behaviour and preferences are constantly changing. What will the customer experience look like in 2030? And what must brands do to meet the expectations of the future consumer? These are a few of the questions addressed in the study 'Experience 2030: The Future of Customer Experience', conducted by Futurum Research and sponsored by SAS.

    Flexibility and far-reaching automation

    The companies surveyed anticipate a large-scale shift toward automated customer interactions by 2030. The study predicts that smart machines will replace humans and handle roughly two-thirds of all customer communication, decisions during real-time interactions, and decisions about marketing campaigns. By 2030, 67% of interactions between companies and consumers via digital technology (online, mobile, assistants, etc.) will be handled by smart machines rather than human employees, and 69% of decisions made during a customer interaction will be made by smart machines.

    Consumers embrace new technology

    According to the study, 78% of companies believe that consumers currently feel uncomfortable dealing with technology in stores, yet this turned out to be true for only 35% of consumers. This perception gap between companies and consumers could grow into a limiting factor for these companies' growth.
    For companies, this level of customer acceptance and expectation creates new opportunities to increase customer engagement. To meet the ever-rising expectations on both sides, however, companies need new solutions that close the gap between consumer technology and marketing technology.

    Investments in AI and AR/VR

    The future of customer experience will largely be shaped by new technologies. The study asked companies which future technologies they are currently investing in to support new customer experiences and improve customer satisfaction by 2030. According to the study, 62% of companies are investing in voice-driven AI assistants to improve customer interaction and customer support, and another 58% are investing in voice-driven AI as a tool for marketing and sales. 54% of companies are investing in augmented reality (AR) and virtual reality (VR) to help consumers remotely visualize the form or use of a product or service, and 53% plan to deploy AR/VR tools to optimize product usage and consumer self-service. All these emerging and more complex customer-loyalty technologies mean that brands must rethink their capabilities in data management, analytical optimization, and automated decision-making, and must be able to deploy these new technologies for tangible business results. These new applications will need to collect, process, and analyze data to support the 'multimedia marketing' that will determine future success.

    Key to success

    Perhaps the biggest challenge for companies right now is their ability to close the trust gap between companies and consumers. Consumers are wary about how companies handle their personal data and feel powerless to change this. Only 54% of consumers trust that companies will treat their data confidentially. This challenges companies optimizing the customer experience to find the right balance between the amount of information they request and the trust of their customers. The results show that companies are well aware of the risks they run: 59% strongly agree that securing customer data is the single most important factor for a good customer experience. Whether companies are ready for this is another question; the study suggests they face considerable difficulties, as 84% of companies worry about changes in government privacy regulations and the extent to which they can comply.

    About the research methodology

    In May 2019, Futurum Research surveyed more than 4,000 respondents in 36 countries across various industries and government institutions. The results were announced at Analytics Experience in Milan, where more than 1,800 data scientists and business professionals come together to learn about new applications and best practices in analytics.

    Source: BI platform

  • Making AI actionable within your organization

    It can be really frustrating to run a successful pilot or implement an AI system without it getting widespread adoption throughout your organization. Operationalizing AI is a really common problem. It may seem that everyone else is using AI to make a huge difference in their business while you’re struggling to figure out how to operationalize the results you’ve gotten from trying a few AI systems.

    There has been so much advancement in AI, so how can you make this great technology actually translate into actionable business results?

    This is a really common problem touching enterprises of all kinds, from the biggest companies to mid-sized businesses.

    Here are a few quick pointers on how to turn your explorations in AI into AI practices leading to real results from investments.

    Pragmatic AI

    Firstly, focus on what gets called 'Pragmatic AI': practical AI with obvious business applications. It’s going to be a long time before we have 'strong AI', so look for solutions built by examining problems that businesses deal with every day and then applying artificial intelligence to solve them. It’s great that your probabilistic Bayesian system thinks of the world differently, or that a company feels it has found a shortcut around some of the things that make deep learning systems slow to train, but what does that mean for the end user of the artificial intelligence? When you’re looking for a practical solution, look for companies that are always trying to improve their user experience and where a PhD in machine learning isn’t needed to write the code.

    Internal valuations

    Similarly, change the way you are considering bringing an AI solution into your company. AI works best when the company isn’t trying to do a science fair project. It works best when it is trying to solve a real business problem. Before evaluating vendors in any particular AI solution or going out to see how RPA solutions really work, talk to users around your business. Listen to the problems they have and think about what kind of solutions would make a huge difference. By making sure that the first AI solution you bring into your organization aligns to business goals, you are much more likely to succeed in getting widespread adoption and a green light to try additional new technologies when it comes time to review budgets.

    And no matter how technology-forward your organization is, AI adoption works best when everyone can understand the results. Pick a KPI focused problem like conversion, customer service, or NPS where the results can be understood without thinking about technology. This helps get others outside of the science project mentality to open their minds on how AI can be used throughout the business.

    Finally, don’t forget that AI can help in a wide variety of ways. Automation is a great place to use AI within an organization, but remember that in many use cases, humans and computers do more together than separately and great uses for AI technology help your company’s employees do their job better or focus on the right pieces of data. These solutions often provide as much value as pure automation!

    Source: Insidebigdata

  • MicroStrategy: Take your business to the next level with machine learning

    It’s been nearly 22 years since history was made across a chess board. The place was New York City, and the event was Game 6 of a series of games between IBM’s “Deep Blue” and the renowned world champion Garry Kasparov. It was the first time ever a computer had defeated a player of that caliber in a multi-game scenario, and it kicked off a wave of innovation that’s been methodically working its way into the modern enterprise.

    Deep Blue was a formidable opponent because of its brute-force approach to chess. In a game where luck is entirely removed from the equation, it could run a search algorithm on a massive scale to evaluate moves, discarding candidate moves once they proved less valuable than a previously examined and still available option. This giant decision tree powered the computer to a winning position in just 19 moves, with Kasparov resigning.
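    The move-discarding described above is the idea behind alpha-beta pruning, the classic refinement of minimax search that chess engines rely on. A minimal sketch on an abstract game tree (toy leaf scores, not Deep Blue's actual evaluation function):

```python
# Minimax with alpha-beta pruning: a branch is abandoned as soon as it
# provably cannot beat an option found earlier in the search.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: a position's score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                   # prune the rest of this branch
            break
    return best

# A tiny two-ply game: each inner list holds the opponent's replies.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, maximizing=True))     # -> 6
```

    Note how the third branch is cut off after its first leaf: once a reply scoring 1 is found, that line can never beat the 6 already guaranteed elsewhere.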

    As impressive as Deep Blue was back then, present-day computing capabilities are stronger by orders of magnitude, inspired by the neural networks of the human brain. Data scientists create inputs and define outputs, and the models detect previously indecipherable patterns, the important variables that influence games, and ultimately the next move to take.

    Models can also continue to ‘learn’ from playing different scenarios and then update the model through a process called ‘reinforcement learning’ (as the Go-playing AlphaZero program does). The result of this? The ability to process millions of scenarios in a fraction of a second to determine the best possible action, with implications far beyond the gameboard.
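    The reinforcement-learning loop can be sketched with the standard tabular Q-learning update (a toy two-action example with invented rewards; AlphaZero's actual training is vastly more elaborate):

```python
# Tabular Q-learning on a trivial two-action task: the value estimate of
# each action is repeatedly nudged toward the reward actually observed.
import random

random.seed(0)
q = {"left": 0.0, "right": 0.0}          # value estimates per action
alpha = 0.1                              # learning rate
rewards = {"left": 0.0, "right": 1.0}    # invented: "right" is better

for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(q, key=q.get)
    # Q-learning update (single state, so no discounted next-state term).
    q[action] += alpha * (rewards[action] - q[action])

print(q)
```

    Each play nudges the chosen action's estimate toward the observed reward, so the better action's value climbs toward 1.0 and gets picked ever more often; that feedback loop is the 'learning from playing different scenarios' described above.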

    Integrating machine learning models into your business workflows comes with its challenges: business analysts are typically unfamiliar with machine learning methods and/or lack the coding skills necessary to create viable models; integration issues with third-party BI software may be a nonstarter; and the need for governed data to avoid incorrectly trained models is a barrier to success.

    As a possible solution, one could use MicroStrategy as a unified platform for creating and deploying data science and machine learning models. With APIs and connectors to hundreds of data sources, analysts and data scientists can pull in trusted data. And when using the R integration pack, business analysts can produce predictive analytics without coding knowledge and disseminate those results throughout their organization.

    The use cases are already coming in as industry leaders put this technology to work. As one example, a large governmental organization reduced employee attrition by 10% using machine learning, R, and MicroStrategy.

    Author: Neil Routman

    Source: MicroStrategy

  • Organizing Big Data by means of AI

    No matter what your professional goals are, the road to success is paved with small gestures. Often framed via KPIs (key performance indicators), these transitional steps form the core categories contextualizing business data. But what data matters?

    In the age of big data, businesses are producing larger amounts of information than ever before and there needs to be efficient ways to categorize and interpret that data. That’s where AI comes in.

    Building Data Categories

    One of the longstanding challenges with KPI development is that there are countless divisions any given business can use. Some focus on website traffic while others are concerned with social media engagement, but the most important thing is to focus on real actions and not vanity measures. Even if it’s just the first step toward a sale, your KPIs should reflect value for your bottom line.


    Small But Powerful

    KPIs typically cover a variety of similar actions – all Facebook behaviors or all inbound traffic, for example. The alternative, though, is to break down KPI-type behaviors into something known as micro conversions. 

    Micro conversions are simple behaviors that signal movement toward an ultimate goal like completing a sale, but carefully gathering data from micro conversions and tracking them can also help identify friction points and other barriers to conversion. This is especially true any time your business undergoes a redesign or institutes a new strategy. Comparing micro data points from the different phases, then, is a high value means of assessment.
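    In practice, tracking micro conversions means instrumenting each small step and comparing drop-off between adjacent steps; the step with the worst rate is your friction point. A sketch with invented funnel counts:

```python
# Invented funnel: each micro conversion step with how many users reached it.
funnel = [
    ("viewed product",   1000),
    ("added to cart",     320),
    ("started checkout",  180),
    ("completed sale",    150),
]

# Step-to-step conversion rates expose the friction points.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    rate = next_n / n
    print(f"{step} -> {next_step}: {rate:.0%}")
```

    Here the 'viewed product -> added to cart' step converts worst, so a before/after comparison of that step is where a redesign assessment would focus.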

    AI Interpretation

    Without AI, this micro data would be burdensome to manage (there’s just so much of it), but AI tools are able both to collect data and to interpret it for application, particularly within comparative frameworks. All AI needs is well-developed KPIs.

    Business KPIs direct AI data collection, allow the system to identify shortfalls, and highlight performance goals that are being met. But it’s important to remember that AI tools can’t fix broader strategic or design problems. With the rise of machine learning, some businesses have come to believe that AI can solve any problem, but what it really does is clarify the data at every level, allowing your business to jump into action.

    Micro Mapping

    Perhaps the easiest way to describe what AI does in the age of big data is with a comparison. Your business is a continent and AI is the cartographer that offers you a map of everything within your business’s boundaries. Every topographical detail and landmark is noted. But the cartographer isn’t planning a trip or analyzing the political situation of your country. That’s up to someone else. In your business, that translates to the marketing department, your UI/UX experts, or C-suite executives. They solve problems by drawing on the map.

    Unprocessed big data is overwhelming: think millions of grains of sand that don’t mean anything on their own. AI processes that data into something useful, something with strategic value. Depending on your KPIs, AI can even draw a path through the data, highlighting common routes from entry to conversion, where customers get lost (what you might consider friction points), and where they engage. When you begin to see data in this way, it becomes clear that it’s a world unto itself, and one that has been fundamentally incomprehensible to users.

    Even older CRM and analytics programs fall short when it comes to seeing the big picture, which is why data management has changed so much in recent years. Suddenly, we have the technology to identify more than click-through rates or page likes. AI fueled by big data ushers in a new organizational era with an emphasis on action. If you’re willing to follow the data, AI will draw you the map.


    Author: Lary Alton

    Source: Information Management

  • Pattern matching: The fuel that makes AI work

    Much of the power of machine learning rests in its ability to detect patterns. This power derives from the ability of machine learning algorithms to be trained on example data such that, when future data is presented, the trained model can recognize that pattern for a particular application. If you can train a system on a pattern, then you can detect that pattern in the future. Indeed, pattern matching in machine learning (and its counterpart, anomaly detection) is what makes many applications of artificial intelligence (AI) work, from image recognition to conversational applications.

    As you can imagine, there are a wide range of use cases for AI-enabled pattern and anomaly detection systems. Pattern recognition, one of the seven core patterns of AI applications, is being applied to fraud detection and analysis, finding outliers and anomalies in big stacks of data; recommendation systems, providing deep insight into large pools of data; and other applications that depend on identification of patterns through training.

    Fraud detection and risk analysis

    One of the challenges with existing fraud detection systems is that they are primarily rules-based, using predefined notions of what constitutes fraudulent or suspicious behavior. The problem is that humans are particularly creative at skirting rules and finding ways to fool systems. Companies looking to reduce fraud, suspicious behavior or other risk are finding solutions in machine learning systems that can either be trained to recognize patterns of fraudulent behavior or, conversely, find outliers and anomalies to learned acceptable behavior.

    Financial systems, especially banking and credit card processing institutions, are early adopters in using machine learning to enable real-time identification of potentially fraudulent transactions. AI-based systems are able to handle millions of transactions per minute and use trained models to make millisecond decisions as to whether a particular transaction is legitimate. These models can identify which purchases don't fit usual spending patterns or look at interactions between paying parties to decide if something should be flagged for further inspection.
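    The transaction-scoring idea above can be sketched with an unsupervised anomaly detector. Below is a minimal illustration using scikit-learn's IsolationForest on purely synthetic transaction amounts; the model choice, threshold and figures are illustrative, not any vendor's actual system:

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Synthetic "legitimate" transactions: amounts clustered around typical spending
    normal = rng.normal(loc=50, scale=15, size=(1000, 1))

    # Train on historical, presumed-legitimate transactions
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # Score new transactions: 1 = fits the learned pattern, -1 = anomaly
    new_transactions = np.array([[48.0], [52.5], [4999.0]])
    flags = model.predict(new_transactions)
    print(flags)  # the 4999.0 transaction is flagged with -1
    ```

    In production, a model like this would be trained per customer or per segment on many features (amount, merchant, location, time of day), and flagged transactions would be routed for further inspection rather than blocked outright.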

    Cybersecurity firms are also finding significant value in the application of machine learning-based pattern and anomaly systems to bolster their capabilities. Rather than depending on signature-based systems, which are primarily oriented toward responding to attacks that have already been reported and analyzed, machine learning-based systems are able to detect anomalous system behavior and block those behaviors from causing problems to the systems or networks.

    These AI-based systems are able to adapt to continuously changing threats and can more easily handle new and unseen attacks. The pattern and anomaly systems can also help to improve overall security by categorizing attacks and improving spam and phishing detection. Rather than requiring users to manually flag suspicious messages, these systems can automatically detect messages that don't fit the usual pattern and quarantine them for future inspection or automatic deletion. These intelligent systems can also autonomously monitor software systems and automatically apply software patches when certain patterns are discovered.

    Uncovering insights in data

    Machine learning-based pattern recognition systems are also being applied to extract greater value from existing data. Machines can look at data to find insights, patterns and groupings and use the power of AI systems to find patterns and anomalies humans aren't always able to see. This has broad applicability to both back-office and front-office operations and systems. Whereas, before, data visualization was the primary way in which users could extract value from large data sets, machine learning is now being used to find the groupings, clusters and outliers that might indicate some deeper connection or insight.

    In one interesting example, through machine learning pattern analysis, Walmart discovered consumers buy strawberry pop-tarts before hurricanes. Using unsupervised learning approaches, Walmart identified the pattern of products that customers usually buy when stocking up ahead of time for hurricanes. In addition to the usual batteries, tarps and bottled water, it discovered that the rate of purchase of strawberry pop-tarts also increased. No doubt, Walmart and other retailers are using the power of machine learning to find equally unexpected, high-value insights from their data.
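    The basket-pattern idea behind the pop-tarts example can be approximated even without a dedicated library: count how often items co-occur in the same basket and rank pairs by support. A toy sketch with made-up baskets (the data is purely illustrative):

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy baskets from a "pre-hurricane" shopping period (illustrative data)
    baskets = [
        {"batteries", "bottled water", "strawberry pop-tarts"},
        {"tarps", "bottled water", "strawberry pop-tarts"},
        {"batteries", "tarps", "bottled water"},
        {"bottled water", "strawberry pop-tarts", "flashlight"},
    ]

    # Count how often each pair of items appears in the same basket
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # Support of a pair = fraction of baskets containing both items
    for pair, count in pair_counts.most_common(3):
        print(pair, count / len(baskets))
    ```

    Real market-basket analysis uses algorithms such as Apriori or FP-Growth to scale this counting to millions of baskets and larger item sets, but the underlying idea is the same.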

    Automatically correcting errors

    Pattern matching in machine learning can also be used to automatically detect and correct errors. Data is rarely clean and often incomplete. AI systems can spot routine mistakes or errors and make adjustments as needed, fixing data, typos and process issues. Machines can learn what normal patterns and behavior look like, quickly spot and identify errors, automatically fix issues on their own and provide feedback if needed.

    For example, algorithms can detect outliers in medical prescription behavior, flag these records in real time and send a notification to healthcare providers when the prescription contains mistakes. Other automated error correction systems are assisting with document-oriented processes, fixing mistakes made by users when entering data into forms by detecting when data such as names are placed into the wrong fields or when other information is incomplete or inappropriately entered.
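    A minimal sketch of the prescription-outlier idea, assuming a simple z-score rule against a historical baseline (the doses and threshold are illustrative; real systems would learn far richer per-medication, per-patient models):

    ```python
    import numpy as np

    # Baseline: historical doses for one medication (illustrative values, in mg)
    history = np.array([50, 55, 48, 52, 50, 49, 51])
    mean, std = history.mean(), history.std()

    def flag_outlier(dose, threshold=3.0):
        """Flag a new prescription whose dose deviates far from the baseline."""
        return abs(dose - mean) / std > threshold

    print(flag_outlier(52))   # within the usual pattern
    print(flag_outlier(500))  # likely a data-entry error, sent for review
    ```

    The same pattern applies to form-filling errors: learn the distribution of valid values for each field, then flag entries that fall far outside it.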

    Similarly, AI-based systems are able to automatically augment data by using patterns learned from previous data collection and integration activities. Using unsupervised learning, these systems can find and group information that might be relevant, connecting all the data sources together. In this way, a request for some piece of data might also retrieve additional, related information, even if not explicitly requested by the query. This enables the system to fill in the gaps when information is missing from the original source, correct errors and resolve inconsistencies.

    Industry applications of pattern matching systems

    In addition to the applications above, there are many use cases for AI systems that implement pattern matching in machine learning capabilities. One use case gaining steam is the application of AI for HR and staffing. AI systems are being tasked to find the best match between job candidates and open positions. While traditional HR systems are dependent on humans to make the connection or use rules-based matching systems, increasingly, HR applications are making use of machine learning to learn what characteristics of employees make the best hires. The systems learn from these patterns of good hires to identify which candidates should float to the surface of the resume pile, resulting in more optimal matches.

    With the human screener removed from this step, AI systems can be used to screen candidates and select the best person while reducing the risk of bias and discrimination. Machine learning systems can sort through thousands of potential candidates and reach out in a personalized way to start a conversation. The systems can even augment the data in the job applicant's resume with information gleaned from additional online sources, providing additional value.

    In the back office, companies are applying pattern recognition systems to detect transactions that run afoul of company rules and regulations. AI startup AppZen uses machine learning to automatically check all invoices and receipts against expense reports and purchase orders. Any items that don't match acceptable transactional patterns are sent for human review, while the rest are expedited through the process. Occupational fraud, on average, costs a company 5% of its revenues each year, with the annual median loss at $140,000, and over 20% of companies reporting losses of $1 million or more.

    The key to solving this problem is to put processes and controls in place that automatically audit, monitor, and accept or reject transactions that don't fit a recognized pattern. AI-based systems are definitely helping in this way, and we'll increasingly see them being used by more organizations as a result.

    Author: Ronald Schmelzer

    Source: TechTarget

  • Preserving privacy within a population: differential privacy

    Preserving privacy within a population: differential privacy

    In this article, I will present the definition of differential privacy and how to preserve the privacy and personal data of users while using their data to train machine learning models or to derive insights with data science technologies.

    What is differential privacy?

    Differential privacy describes a promise, made by a data holder, or curator, to a data subject:

    ''You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources are available.''

    At their best, differentially private database mechanisms can make confidential data widely available for accurate data analysis, without resorting to data clean rooms, data usage agreements, data protection plans, or restricted views.

    Nonetheless, data utility will eventually be consumed: the Fundamental Law of Information Recovery states that overly accurate answers to too many questions will destroy privacy in a spectacular way.

    Differential privacy addresses the paradox of learning nothing about an individual while learning useful information about a population.

    A medical database may teach us that smoking causes cancer, affecting an insurance company’s view of a smoker’s long-term medical costs. 

    Has the smoker been harmed by the analysis?

    Perhaps — his insurance premiums may rise, if the insurer knows he smokes. He may also be helped — learning of his health risks, he enters a smoking cessation program.

    Has the smoker’s privacy been compromised?

    It is certainly the case that more is known about him after the study than was known before, but was his information “leaked”?

    Differential privacy will take the view that it was not, with the rationale that the impact on the smoker is the same independent of whether or not he was in the study. It is the conclusions reached in the study that affect the smoker, not his presence or absence in the data set.

    Differential privacy ensures that the same conclusions, for example, smoking causes cancer, will be reached, independent of whether any individual opts into or opts out of the data set.
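    The standard tool for delivering this promise on numeric queries is the Laplace mechanism: answer with the true value plus noise scaled to the query's sensitivity divided by the privacy budget ε. A minimal sketch for a counting query (the records and the choice of ε are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def dp_count(data, predicate, epsilon=0.5):
        """Differentially private count via the Laplace mechanism.

        A counting query has sensitivity 1 (adding or removing one person
        changes the true count by at most 1), so the noise is drawn from
        Laplace(scale = 1 / epsilon).
        """
        true_count = sum(predicate(x) for x in data)
        return true_count + rng.laplace(scale=1.0 / epsilon)

    # Toy medical records: 1 = smoker, 0 = non-smoker (illustrative data)
    records = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    noisy = dp_count(records, lambda x: x == 1)
    print(round(noisy, 2))  # randomized, but close to the true count of 6
    ```

    Smaller ε means more noise and stronger privacy; each released answer spends part of the privacy budget, which is exactly the "data utility will eventually be consumed" point above.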

    Artificial Intelligence and the privacy paradox

    Consider an institution, e.g. the National Institutes of Health, the Census Bureau, or a social networking company, in possession of a dataset containing sensitive information about individuals. For example, the dataset may consist of medical records, socioeconomic attributes, or geolocation data. The institution faces an important tradeoff when deciding how to make this dataset available for statistical analysis.

    On one hand, if the institution releases the dataset (or at least statistical information about it), it can enable important research and eventually inform policy decisions.

    On the other hand, for a number of ethical and legal reasons it is important to protect the individual-level privacy of the data subjects. The field of privacy-preserving data analysis aims to reconcile these two objectives. That is, it seeks to enable rich statistical analyses on sensitive datasets while protecting the privacy of the individuals who contributed to them.

    Differential privacy and Machine Learning

    One of the most useful tasks in data analysis is machine learning: the problem of automatically finding a simple rule to accurately predict certain unknown characteristics of never before seen data.

    Many machine learning tasks can be performed under the constraint of differential privacy. In fact, the constraint of privacy is not necessarily at odds with the goals of machine learning; both aim to extract information from the distribution from which the data was drawn, rather than from individual data points.

    The goal in machine learning is very often similar to the goal in private data analysis. The learner typically wishes to learn some simple rule that explains a data set. However, she wishes this rule to generalize — that is, it should be that the rule she learns not only correctly describes the data that she has on hand, but that it should also be able to correctly describe new data that is drawn from the same distribution.

    Generally, this means that she wants to learn a rule that captures distributional information about the data set on hand, in a way that does not depend too specifically on any single data point.

    Of course, this is exactly the goal of private data analysis: to reveal distributional information about the private data set without revealing too much about any single individual in it (remember the overfitting phenomenon?).

    It should come as no surprise then that machine learning and private data analysis are closely linked. In fact, as we will see, we are often able to perform private machine learning nearly as accurately, with nearly the same number of examples as we can perform non-private machine learning.

    Cryptography and privacy

    Some recent work has focused on machine learning or general computation over encrypted data.

    Recently, Google deployed a new system for assembling a deep learning model from thousands of locally-learned models while preserving privacy, which they call Federated Learning.


    Differential privacy should not be seen as a limitation in any context. Rather, we should look at it as a watchdog ensuring our compliance with standards for handling sensitive data. We generate more data than we think and leave a digital footprint everywhere; as researchers in machine learning and data science, we should focus more on this topic and find a fair trade-off between privacy and accurate models.

  • Preventing fraud by using AI technology

    Preventing fraud by using AI technology

    As fraudsters become increasingly professional and technologically advanced, financial organizations need to rely on products that use artificial intelligence (AI) to prevent fraud.

    Identity verification technology vendor Jumio released Jumio Go, a real-time, automated platform for identity verification. Coming at a time when cybersecurity is at risk more than ever because cybercriminals are becoming more and more technologically advanced, Jumio Go uses a combination of AI, optical character recognition and biometrics to automatically verify a user's identity in real time.

    Jumio, founded in 2010, has long sold an AI for fraud prevention platform used by organizations in financial services, travel, gaming and retail industries. The Palo Alto, Calif., vendor's new Jumio Go platform builds on its existing technologies, which include facial recognition and verification tools, while also simplifying them.

    Jumio Go, launched Oct. 28, provides real-time identity verification, giving users results much faster than Jumio's flagship product, which takes 30 to 60 seconds to verify a user, according to Jumio. It also removes the human review step: the process of matching a real-time photo of a user's face to a saved photo is entirely automated. That speeds up verification and frees employees for other tasks, but could also make it somewhat less secure.

    The new product accepts fewer ID documents than Jumio's flagship platform, but the tradeoff is the boost in real-time speed. Using natural language processing, Jumio's platforms can read through and extract relevant information from documents. The system scans that information for irregularities, such as odd wordings or misspellings, which could indicate fraud.

    AI for fraud prevention in finance

    For financial institutions, whose customers conduct much more business online, this type of fraud detection and identity verification technology is vital.

    For combating fraud, 'leveraging AI is critical', said Amyn Dhala, global product lead at AI Express, Mastercard's methodology for the deployment of AI that grew out of the credit card company's 2017 acquisition of Brighterion.

    Through AI Express, Mastercard sells AI for fraud prevention tools, as well as AI-powered technologies, to help predict credit risk, manage network security and catch money-laundering.

    AI, Dhala said in an interview at AI World 2019 in Boston, is 'important to provide a better customer experience and drive profitability', as well as to ensure customer safety.

    The 9 to 5 fraudster

    For financial institutions, blocking fraudsters is no simple task. Criminals intent on fraud are taking a professional approach to their work, working for certain hours during the week and taking weekends off, according to an October 2019 report from Onfido, a London-based vendor of AI-driven identity software.

    Also, today's fraudsters are highly technologically skilled, said Dan Drapeau, head of technology at Blue Fountain Media, a digital marketing agency owned by Pactera, a technology consulting and implementation firm based in China.

    'You can always throw new technology at the problem, but cybercriminals are always going to do something new and innovative, and AI algorithms have to catch up to that', Drapeau said. 'Cybercriminals are always that one step ahead'.

    'As good as AI and machine learning get, it still will always take time to catch up to the newest innovation from criminals', he added.

    Still, by using AI for fraud prevention, financial organizations can stop a good deal of fraud automatically, Drapeau said. For now, combining AI with manual work, such as checking or double-checking data and verification documents, works best, he said.

    Author: Mark Labbe

    Source: TechTarget

  • Pyramid Analytics' 5 main takeaways from the Insurance AI and Analytics USA conference in Chicago

    Pyramid Analytics' 5 main takeaways from the Insurance AI and Analytics USA conference in Chicago

    Pyramid Analytics was thrilled to participate in the Insurance AI and Analytics USA conference in beautiful Chicago, May 2-3. The goal of the conference was to provide education to insurance leaders looking for ways to use AI and ML to extract more value out of their data. In all of their conversations, the eagerness to do more with data was palpable, but a tinge of frustration could be detected beneath the surface.

    Curious to understand this contradiction, they started most of their conversations with the same basic question: 'What brings you to the show?' Followed by a slightly deeper question: 'Where are you with your AI and ML initiatives?'

    The responses varied. However, a common thread emerged: despite the desire to incorporate AI and ML capabilities into routine business practices, roadblocks remain, regardless of carrier type. Chief among attendees' concerns was the ability to access data; it appears that data silos are alive and well. We also heard many express frustration with the tools used to derive AI and ML insights.

    Here are the most common reasons for attending the show, organized into five groups by persona:

    1. Data scientists looking for deeper access to data 

    The data scientists seemed to struggle with data access, which is often trapped within departments throughout the organization. To do their jobs effectively, data scientists need to access data so they can unlock trapped business value. They were seeking solutions that would help them bridge the gap between data and analytics.

    2. Executives from traditional organizations trying to understand the way forward

    To varying degrees, the insurance executives had AI and ML programs in place but weren’t satisfied with the results. They attended the conference to learn how they could extract more value from their AI and ML initiatives.

    3. Sophisticated insurers seeking technology to gain an edge on the competition

    This was a general takeaway from individuals from newer insurance companies who fit squarely in the “early technology adopter” category. Lacking the constraints of typical insurers (legacy processes and systems), these individuals were seeking information on new technologies and hoping to build partnerships with vendors to achieve further differentiation.

    4. Data and technology vendors looking to build meaningful partnerships

    There were many representatives from data and technology companies seeking out insurance partners looking to advance their businesses at the margins, either by enriching existing data stores or by finding new or unique data streams.

    5. Consultants promoting their unique approach to AI and ML initiatives

    It’s clear that AI and ML initiatives require more than just tools, people, and processes. They require strategic direction and a roadmap that builds consistency and accountability. There were a number of consultants making themselves available to insurers.

    Author: Michael Hollenbeck

    Source: Pyramid Analytics

  • Reusing data for ML? Hash your data before you create the train-test split

    Reusing data for ML? Hash your data before you create the train-test split

    The best way to make sure the training and test sets are never mixed while updating the data set.

    Recently, I was reading Aurélien Géron’s Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow (2nd edition) and it made me realize that there might be an issue with the way we approach the train-test split while preparing data for machine learning models. In this article, I quickly demonstrate what the issue is and show an example of how to fix it.

    Illustrating the issue

    I want to say upfront that the issue I mentioned is not always a problem per se; it all depends on the use case. While preparing the data for training and evaluation, we normally split the data using a function such as Scikit-Learn’s train_test_split. To make sure that the results are reproducible, we use the random_state argument, so however many times we split the same data set, we will always get the very same train-test split. And in this sentence lies the potential issue I mentioned before, particularly in the part about the same data set.

    Imagine a case in which you build a model predicting customer churn. You received satisfactory results, and your model is already in production and generating added value for the company. Great work! However, after some time, there might be new patterns among the customers (for example, a global pandemic changed user behavior), or you may simply have gathered much more data as more customers joined the company. For whatever reason, you might want to retrain the model and use the new data for both training and validation.

    And this is exactly when the issue appears. When you use the good old train_test_split on the new data set (all of the old observations + the new ones you gathered since training), there is no guarantee that the observations you trained on in the past will still be used for training, and the same would be true for the test set. I will illustrate this with an example in Python:

    # import the libraries 
    import pandas as pd
    import numpy as np
    from sklearn.model_selection import train_test_split
    from zlib import crc32
    # generate the first DataFrame
    X_1 = pd.DataFrame(data={"variable": np.random.normal(size=1000)})
    # apply the train-test split
    X_1_train, X_1_test = train_test_split(X_1, test_size=0.2, random_state=42)
    # add new observations to the DataFrame
    X_2 = pd.concat([X_1, pd.DataFrame(data={"variable": np.random.normal(size=500)})]).reset_index(drop=True)
    # again, apply the train-test split to the updated DataFrame
    X_2_train, X_2_test = train_test_split(X_2, test_size=0.2, random_state=42)
    # see what is the overlap of indices
    print(f"Train set: {len(set(X_1_train.index).intersection(set(X_2_train.index)))}")
    print(f"Test set: {len(set(X_1_test.index).intersection(set(X_2_test.index)))}")
    # Train set: 669
    # Test set: 59

    First, I generated a DataFrame with 1000 random observations. I applied the 80–20 train-test split using a random_state to ensure the results are reproducible. Then, I created a new DataFrame, by adding 500 observations to the end of the initial DataFrame (resetting the index is important to keep track of the observations in this case!). Once again, I applied the train-test split and then investigated how many observations from the initial sets actually appear in the second ones. For that, I used the handy intersection method of a Python’s set. The answer is 669 out of 800 and 59 out of 200. This clearly shows that the data was reshuffled.

    What are the potential dangers of such an issue? It all depends on the volume of data, but it can happen that in an unfortunate random draw all the new observations will end up in one of the sets, and not help that much with proper model fitting. Even though such a case is unlikely, the more likely cases of uneven distribution among the sets are not that desirable either. Hence, it would be better to evenly distribute the new data to both sets, while keeping the original observations assigned to their respective sets.

    Solving the issue

    So how can we solve this issue? One possibility would be to allocate the observations to the training and test sets based on a certain unique identifier. We can calculate the hash of observations’ identifier using some kind of a hashing function and if the value is smaller than x% of the maximum value, we put that observation into the test set. Otherwise, it belongs to the training set.

    You can see an example solution (based on the one presented by Aurélien Géron in his book) in the following function, which uses the CRC32 algorithm. I will not go into the details of the algorithm, you can read about CRC here. Alternatively, here you can find a good explanation of why CRC32 can very well serve as a hashing function and what drawbacks it has — mostly in terms of security, but that is not a problem for us. The function follows the logic described in the paragraph above, where 2³² is the maximum value of this hashing function:

    def hashed_train_test_split(df, index_col, test_size=0.2):
        """Train-test split based on the hash of the unique identifier."""
        test_index = df[index_col].apply(lambda x: crc32(np.int64(x)))
        test_index = test_index < test_size * 2**32
        return df.loc[~test_index], df.loc[test_index]

    Note: The function above will work for Python 3. To adjust it for Python 2, we should follow crc32’s documentation and use it as follows: crc32(data) & 0xffffffff.

    Before testing the function in practice, it is really important to mention that you should use a unique and immutable identifier for the hashing function. And for this particular implementation, also a numeric one (though this can be relatively easily extended to include strings as well).

    In our toy example, we can safely use the row ID as a unique identifier, as we only append the new observations at the very end of the initial DataFrame and never delete any rows. However, this is something to be aware of while using this approach for more complex cases. So a good identifier might be the customer’s unique number, as by design those should only increase and there should be no duplicates.

    To confirm that the function is doing what we want it to do, we once again run the test scenario as shown above. This time, for both DataFrames we use the hashed_train_test_split function:

    # create an index column (should be immutable and unique)
    X_1 = X_1.reset_index(drop=False)
    X_2 = X_2.reset_index(drop=False)
    # apply the improved train-test split
    X_1_train_hashed, X_1_test_hashed = hashed_train_test_split(X_1, "index")
    X_2_train_hashed, X_2_test_hashed = hashed_train_test_split(X_2, "index")
    # see what is the overlap of indices
    print(f"Train set: {len(set(X_1_train_hashed.index).intersection(set(X_2_train_hashed.index)))}")
    print(f"Test set: {len(set(X_1_test_hashed.index).intersection(set(X_2_test_hashed.index)))}")
    # Train set: 800
    # Test set: 200

    While using the hashed unique identifier for the allocation, we achieved perfect overlap for both training and test sets.


    In this article, I showed how to use hashing functions to improve the default behavior of the train-test split. The described issue is not very apparent to many data scientists, as it mostly occurs when retraining ML models on new, updated data sets. So it is not something often mentioned in textbooks, and one does not come across it while playing with example data sets, even the ones from Kaggle competitions. As I mentioned before, this might not even be an issue for us, as it really depends on the use case. However, I do believe that one should be aware of it and know how to fix it if the need arises.

    Author: Eryk Lewinson

    Source: Towards Data Science

  • Routine jobs are being taken over by robots and artificial intelligence

    As of 2016, robots and artificial intelligence are already developed far enough to take over a relatively large share of humans' predictable physical work and data-processing tasks. Moreover, technological progress will see ever more human tasks taken over, leading either to more time for other tasks or to a reduction in the number of human workers.

    Automation and robotization offer humanity the opportunity to free itself from repetitive physical work, which is often experienced as unpleasant or boring. Although the disappearance of this work will have positive effects on aspects such as health and work quality, the development also has negative effects on employment, especially in jobs that require few skills. In recent years there has been much debate about the scale of the threat robots pose to human jobs, and a recent study by McKinsey & Company adds more fuel to the fire. According to the American consulting firm's estimates, up to 51% of all work in the United States will be heavily affected by robotization and AI technology in the short term.

    Analyzing work activities

    The study, based on an analysis of more than 2,000 work-related activities across more than 800 occupations in the US, suggests that predictable physical work in relatively stable environments runs the greatest risk of being taken over by robots or another form of automation. Examples of such environments include the accommodation and hospitality sector, manufacturing and retail. The potential for robotization is especially large in manufacturing: roughly a third of all work in the sector can be considered predictable. Given current automation technology, up to 78% of this work could be automated.

    But it is not only simple production work that can be automated; data-processing and data-collection work can also be robotized with today's technology. According to McKinsey's calculations, up to 47% of a retail salesperson's tasks in this area can be automated, though this is still far below the 86% automation potential in the data-related work of bookkeepers, accountants and auditors.

    Automation is technically feasible

    The study also mapped which occupations have the greatest automation potential. Given current technology, educational services and management appear to be the fields least affected by robotization and AI technology. In education in particular, the share of automatable tasks is low, with little data collection, data processing or predictable physical work. Managers can expect some automation in their work, mainly in data processing and collection. In construction and agriculture, much of the work can be considered unpredictable. The unpredictable nature of these activities protects workers in these segments, because such tasks are harder to automate.

    McKinsey emphasizes that the analysis focuses on the ability of current technologies to take over human tasks. That something is technologically possible does not mean, according to the consulting firm, that the work will actually be taken over by robots or intelligent technology. The study does not account for the implementation costs of these technologies or for the limits of automation. In some cases, human workers will therefore remain cheaper and more readily available than a robotized system.

    Looking ahead, the researchers predict that new technologies in robotics and artificial intelligence will make even more tasks automatable. Technology that enables natural conversations with robots, in which machines understand human language and respond automatically, will, according to the researchers, have a major impact on the possibilities for further robotization.

    Source: Consultancy.nl, October 3, 2016


  • Running a tech company? Use data science and AI to improve

    Running a tech company? Use data science and AI to improve

    There are a lot of great benefits of artificial intelligence for startups that can't be overlooked.

    As a tech company, you will always be looking for ways to develop. Using data science and artificial intelligence can be useful for this type of growth. While they share some similarities, there are also some differences between the two. You may be surprised to hear about the amazing benefits that AI offers for startups, especially those in the tech sector.

    Artificial Intelligence

    You may have heard artificial intelligence being referred to as AI in countless movies and TV shows. In real life, it’s used to create improvements rather than to turn on humanity, as it is so often shown doing on screen, however entertaining that makes for viewing.

    AI has many uses, such as helping with translations, analyzing complex information and decision-making. It also has the ability to learn and therefore improve and adapt.

    Rodrigo Liang is CEO of SambaNova, which provides both hardware and software to businesses for the purpose of analyzing data. While this can be classed as data science, one difference is that data science tends to use a predictive model to make its analysis, while AI can be capable of analyzing based on learned knowledge and facts. This information may not have been programmed, which is why AI can be more precise and take factors into account that weren’t previously considered.

    Data science

    Data science covers a broad range of techniques, including statistics, design and development. It can be used to achieve quick mathematical calculations and find hidden patterns and trends in the data it analyzes, but it needs an element of human intervention. One difference is that using AI can remove the need for human input as it learns and develops.

    The programming for data science relies on already having statistics and predictive trends to work with. This information can then be used to find patterns and other details that might not be immediately obvious without hours, days or even weeks of human analysis.
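    The pattern-finding described above can be sketched as a least-squares trend fit, which is a predictive model in its simplest form. This is an illustrative sketch in plain Python, not a reference to any specific tool; the function names and the sales series are invented.

```python
# Minimal sketch of statistical pattern-finding: fit a straight-line trend
# to a series of observations and extrapolate it.
def fit_trend(values):
    """Return (slope, intercept) of the least-squares line through values."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast(values, steps_ahead):
    """Extrapolate the fitted trend to a future period."""
    slope, intercept = fit_trend(values)
    return intercept + slope * (len(values) - 1 + steps_ahead)

sales = [100, 104, 108, 112, 116]   # hypothetical revenue history
print(forecast(sales, 1))           # → 120.0
```

    The human analyst still decides which series to model and whether a fitted trend is meaningful, which is exactly the human intervention the paragraph describes.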

    Both AI and data science can be used interchangeably depending on what is required, and they can complement each other.

    The benefits to your tech company

    One way that AI can be used to benefit your tech company is to carry out risk analysis. Done manually, this can be an expensive task, particularly in the event of human error. AI also saves time, as it can process and analyze large amounts of information much quicker than a person can. Therefore, although the initial outlay might be high, the savings to your business will more than compensate for it. One example of this is fraud detection, where an undetected or unprevented fraud could in some cases be enough to force a business to close.
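    As a rough illustration of the fraud-detection idea above, and assuming a simple statistical rule rather than any particular product, a transaction can be flagged when it deviates strongly from the account's history. The threshold and the sample data below are invented for illustration.

```python
# Hedged sketch: flag transactions far from the historical mean (z-score rule).
# This is a minimal stand-in for a real fraud detector, not a production model.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the transactions lying more than `threshold` deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

history = [25.0, 30.0, 27.5, 26.0, 29.0, 31.0, 28.0, 950.0]
print(flag_anomalies(history, threshold=2.0))   # → [950.0]
```

    A real system would of course learn per-account baselines and use far richer features; the point is that the repetitive screening is automated while humans review only the flagged cases.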

    AI can also help with translating different languages. Most businesses rely on trading with customers and other businesses around the world, but the language barrier can make that more difficult. If you need to meet with or send emails to clients, or create content for speakers of other languages, hiring a translator can be expensive. It’s also risky if you’re dealing with sensitive information. That’s why AI is so popular for translating. It not only saves money but also inspires trust in your company, as the information is kept secure.

    Data science can be used to spot trends and patterns in your business. This is useful if you need to cut costs in areas that are losing money for your business, or if you need to focus your attention on more successful aspects to boost these further. No successful tech company will want to continue spending money on the parts of it that aren’t cost-efficient. AI can work well with data science here, by thinking logically to find a viable solution and make improvements.

    Although AI can read human facial expressions, tone of voice and body language to interpret emotion, it is its ability to find solutions using logic that can be an advantage to your tech company. While you and your employees may try to operate fairly and make the right decisions, it’s difficult to be completely impartial. We have our own thoughts and opinions, and these can shape our decisions, whether or not we want them to.

    When it comes to repetition in the workplace, nobody wants to be stuck doing the same thing repeatedly. It’s no good for the morale of your employees, and they will eventually leave if they don’t feel like they’re getting job satisfaction. It may feel like using data science and AI in your tech company will kill off jobs for humans. However, it’s just as likely that they can be better placed doing other less menial tasks within your company. If using these technologies results in fewer financial losses and more gains, then there is no reason why employees can’t be relocated elsewhere in the company on more appealing tasks.

    AI offers great advantages for tech startups

    AI and data science can be great for your tech company, removing or lowering risk, increasing profits, and generally helping you run your company with fewer problems, and with fewer job losses than you might think. Any initial costs will usually be recouped.

    Author: Matt James

    Source: Smart Data Collective

  • SAS: 4 real-world artificial intelligence applications

    SAS: 4 real-world artificial intelligence applications

    Everyone is talking about AI (artificial intelligence). Unfortunately, a lot of what you hear about AI in movies and on the TV is sensationalized for entertainment.

    Indeed, AI is overhyped. But AI is also real and powerful.

    Consider this: engineers worked for years on hand-crafted models for object detection, facial recognition and natural language translation. Despite being honed by the best of our species, those algorithms do not come close to what data-driven approaches can accomplish today. When we let algorithms discover patterns from data, they outperform human-coded logic on many tasks that involve sensing the natural world.

    The powerful message of AI is not that machines are taking over the world. It is that we can guide machines to generate tremendous value by unlocking the information, patterns and behaviors that are captured in data.

    Today I want to share four real-world applications of SAS AI and introduce you to five SAS employees who are working to put this technology into the hands of decision makers, from caseworkers and clinicians to police officers and college administrators.

    Augmenting health care with medical image analysis

    Fijoy Vadakkumpadan, a Senior Staff Scientist on the SAS Computer Vision team, is no stranger to the importance of medical image analysis. He credits ultrasound technology with helping to ensure a safe delivery of his twin daughters four years ago. Today, he is excited that his work at SAS could make a similar impact on someone else’s life.

    Recently, Fijoy’s team has extended the SAS Platform to analyze medical images. The technology uses an artificial neural network to recognize objects on medical images and thus improve healthcare.

    Designing AI algorithms you can trust

    Xin Hunt, a Senior Machine Learning Developer at SAS, hopes to have a big impact on the future of machine learning. She is focused on interpretability and explainability of machine learning models, saying, 'In order for society to accept it, they have to understand it'.

    Interpretability provides a mathematical understanding of a machine learning model's outputs. You can use interpretability methods to show how the model reacts to changes in its inputs, for example.

    Explainability goes further than that. It offers full verbal explanations of how a model functions, what parts of the model logic were derived automatically, what parts were modified in post-processing, how the model meets regulations, and so forth.
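    The probing described under interpretability can be sketched very simply: perturb one input while holding the others fixed, and record how the output moves. The model below is a hypothetical linear scorer with invented weights; the perturbation loop is the point, not the model.

```python
# Sketch of a one-at-a-time sensitivity probe for a black-box model.
def model(features):
    # hypothetical credit-risk scorer; these weights are invented for illustration
    weights = {"income": -0.4, "debt": 0.9, "age": -0.1}
    return sum(weights[k] * v for k, v in features.items())

def sensitivity(features, delta=1.0):
    """Nudge each input by `delta` and record how much the output shifts."""
    base = model(features)
    shifts = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        shifts[name] = model(perturbed) - base
    return shifts

print(sensitivity({"income": 50.0, "debt": 20.0, "age": 40.0}))
```

    For a linear model the shifts simply recover the weights; for a real nonlinear model the same probe reveals local behavior around a specific input, which is what interpretability methods report.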

    Making machine learning accessible to everyone

    From exploring and transforming data to selecting features and comparing algorithms, there are multiple steps to building a machine learning model. What if you could apply all those steps with the click of a button?

    That’s what the development teams of Susan Haller and Dragos Coles have done. Susan is the Director of Advanced Analytics R&D and Dragos is a Senior Machine Learning Developer at SAS. They are showing a powerful tool that offers an API for dynamic, automated model building. The resulting model is completely transparent, so you can examine and modify it after it is built.

    Deploying AI models in the field

    You can do everything right when building and refining a machine learning model, but if you do not deploy it where decisions are made it will not do any good.

    Seb Charrot, a Senior Manager in the Scottish R&D Team, enjoys deploying analytics to solve real problems for real people. He and his team build SAS Mobile Investigator, an application that allows caseworkers, investigators and officers in the field to receive tasks, be notified of risks and concerns regarding their caseload or coverage area, and raise reports on the go.

    Moving AI into the real world

    When you move past the science project phase of analytics and build solutions for the real world, you will find that you can enable everyone, not just those with data science degrees, to make decisions based on data. As a result, everyone’s jobs become easier and more productive. Plus, increased access to analytics leads to faster and more reliable decisions. Technology is unstoppable, it is who we are, it is what we do. Not just at SAS, but as a species.

    Author: Oliver Schabenberger

    Source: SAS

  • Should we fear Artificial Intelligence

    If you watch films and TV shows, in which AI has been exploited to create any number of apocalyptic scenarios, the answer might be yes. After watching Blade Runner or The Matrix or, as a more recent example, Ex Machina, it’s easier to understand why AI touches off visceral reactions in the layman.

    It’s no secret that automation has posed a real threat to lower-skilled workers in blue collar industries, and that has grown into a fear of all forms of artificial intelligence. But a lot of complexities stand between where we are today and production AI, particularly the struggle to bridge the AI chasm. In other words, the type of AI Hollywood suggests we should fear, taking our jobs and possibly more, is a long way off.

    At the other end of the pop culture spectrum, we have people who have embraced AI as the future of mankind. Google’s chief futurist Ray Kurzweil is a great example of thinkers who have championed AI as the next step in the evolution of human intelligence. So which version is our AI future?

    The truth is likely somewhere in the middle. Artificial intelligence won’t compete against humans with extinction-level stakes à la Terminator, at least in forthcoming years; nor will it transcend us as Kurzweil suggests. The likeliest outcome in the near future is we carve out symbiotic roles for the two, because of their respective shortcomings.

    While many people expect any AI they interact with to pass the Turing test, the human brain is the most advanced machine we know of. Thanks to emotional intelligence, humans can interpret and adapt in real time to changing circumstances, and react differently to the same stimuli. That emotional intelligence makes humans a benchmark that is tough for AI to meet.

    We are all talking about Amazon Go, Amazon’s attempt to bring its website to life in fully automated 3D retail centers. But who will customers talk to when an item is missing or a mistake is made in billing? We want human interactions, like a conversation with the neighborhood baker (if you’re French like me) or the opinion of a salesperson on the fit of a jacket. Now we also want efficiency, but not to the exclusion of adaptable and sympathetic emotional intelligence. 

    In some situations, efficiency and safety are preferred over empathy or creativity. For instance, many favor delegating hazardous tasks in factories or oilfields to machines, letting humans handle higher-level strategic tasks like managing employees or drawing on both the left and right brain to flesh out designs.

    The world is becoming a more complex place and we can welcome more AI to help us navigate it. Consider the accelerating advance of research in many scientific fields, making staying an expert even in a well-defined field a real challenge. The issue is not just that your field is growing, but that it touches on and draws from many other fields that are growing as well. As a result, knowledge bases are growing exponentially.

    A heart surgeon faced with a tough choice may consult a few books or a couple of experts and then identify patterns and weigh different outcomes to make a decision. Instead, they could draw on an AI to assimilate the knowledge base and reach a logical decision from a truly holistic standpoint. This does not guarantee that it will be the right answer. Machine learning can help the surgeon weigh thousands of similar cases, consider every medical angle, and even cross-reference the patient’s family history. The surgeon could even cover all this ground in less time than it would have taken to page through books or call advisors. But the purely logical decision should not be the right and final decision. Doing the right thing is different from having the highest probability of success, and so the surgeon will have to consider empathy for the family, the quality of life of the patient, and many other emotional factors.

    For now, machine learning is the most straightforward AI component to implement, and the one critical to improving the human condition. ML limits AI outputs to assimilating large quantities of data and defining patterns, but it acknowledges that AI cannot evaluate complex, novel, or emotional variables and leaves multidimensional decision making to humans. 

    As researchers and futurists struggle to bring true AI to the masses, it will be a progressive transition. What I am interested to see is whether or not a rapid transition could trigger a generational clash.

    Just like the pre-Internet and post-Internet generations, will we see pre-AI and post-AI ones? If that’s the case, as with many technologies, the last generation to fear it may raise the first generation to embrace it.

    Author: Isabelle Guis 

  • Successfully implementing AI into practice

    Artificial Intelligence (AI) can be a real value driver for organizations. As the power of algorithms, computing and amounts of data surge, companies within manufacturing and industry start to see an increasing amount of use cases. These systems could drive efficiency and enhance capability. But also automatize tasks, decrease costs and improve revenue.

    Success and value generated by AI benefit from a good understanding and expectation of what the technology can deliver, from the C-suite down. Organizations in general should also have a well-considered implementation process. So concludes IBM in its recently published white paper on AI, ‘Beyond the hype: A guide to understanding and successfully implementing artificial intelligence within your business’.

    Putting AI into practice: specific tasks

    AI is not about sentient robots and magic boxes. AI is a science and a set of computational technologies, inspired by the ways people use their nervous systems and bodies to sense, learn, reason and take action, though they typically operate quite differently. AI encompasses machine learning (machines that learn from data, with algorithms adjusting themselves) and deep learning (a combination of mutually linked algorithms).

    Within AI, data scientists extract knowledge and interpret data using the right tools and statistical methods. The machines learn to recognize patterns in the data that are fed to them, and map these patterns to future outcomes.


    Relevant AI use cases span virtually every industry, but three main macro domains continue to drive both adoption and most of the economic value across businesses. Cognitive engagement involves delivering new ways for humans to engage with machines. Cognitive insights and knowledge addresses how to augment humans who are overwhelmed with information and knowledge. And cognitive automation relates to moving from process automation to mimicking human intelligence, to facilitate complex and knowledge-intensive business decisions.

    Below are some examples of successful implementations within the industrial and manufacturing domain:

    • Using the many different available sensor measurements from large truck engines, a neural network at a manufacturer was trained to recognize normal and abnormal engine behavior. The model is able to detect when specific measurements are out of the ordinary, and anomalous sensor readings are highly predictive of pending engine failures.
    • At a car manufacturer, supervised learning techniques were used to develop predictive models that provide an early warning of failure based on the various system messages and sensor readings that continuously stream from the production line. This early warning can be used to prioritize maintenance and reduce downtime as well as false positives and needless effort.
    • A utility company combined the output of machine learning-based predictive models with prescriptive, mathematical optimization models to prescribe the optimal mix of power production sources that meets predicted demand at minimal cost. This required predicting demand as well as the available solar and wind energy capacity.
    • To understand its business dynamics and create an inventory of potentially relevant data sources, a materials producer used machine learning models to learn price behavior and forecast future price development. The models also enabled buyers to evaluate their own ‘what if’ scenarios, and it all came together for the user in an interactive dashboard.
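    The first example in the list above, recognizing normal versus abnormal engine behavior, can be sketched far more simply than a neural network: learn the normal operating envelope of each sensor from healthy runs, then flag readings that fall outside it. The sensor names and values below are invented for illustration.

```python
# Minimal sketch of anomaly detection on engine sensors: learn a per-sensor
# normal range from healthy readings, then flag out-of-envelope values.
def learn_envelope(healthy_readings):
    """healthy_readings: list of {sensor_name: value} dicts from normal runs."""
    envelope = {}
    for sample in healthy_readings:
        for name, value in sample.items():
            lo, hi = envelope.get(name, (value, value))
            envelope[name] = (min(lo, value), max(hi, value))
    return envelope

def abnormal_sensors(envelope, reading, margin=0.1):
    """Return sensors whose value falls outside the learned range plus a margin."""
    flagged = []
    for name, value in reading.items():
        lo, hi = envelope[name]
        span = hi - lo
        if value < lo - margin * span or value > hi + margin * span:
            flagged.append(name)
    return flagged

normal = [{"oil_temp": 80, "rpm": 1500}, {"oil_temp": 95, "rpm": 1900}]
env = learn_envelope(normal)
print(abnormal_sensors(env, {"oil_temp": 130, "rpm": 1700}))   # → ['oil_temp']
```

    A trained neural network replaces the min/max envelope with a learned decision boundary, but the workflow, learn from normal data and flag deviations, is the same.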

    There are three main steps to implement AI:

    1. Develop an AI strategy and roadmap
    2. Establish AI capabilities and skills
    3. Start small and scale quickly

    In the previously mentioned white paper, IBM provides some practical recommendations to avoid frequent pitfalls such as cultural or managerial resistance, bad or insufficient data, overly high or low expectations, lack of capabilities, et cetera.

    Based on its experience and knowledge, IBM can help to successfully implement AI and guide organizations in the transformation to Industry 4.0. IBM enables companies to experiment with big ideas, acquire new expertise and build new enterprise-grade solutions for immediate market impact. It gives companies the speed of a start-up, at the scale and rigor of an enterprise.

    Author: Marloes Roelands

    Source: IBM

  • Taking advantage of automation technology in competitive intelligence

    Taking advantage of automation technology in competitive intelligence

    If you’re a market or product researcher, or an intelligence specialist, you’re probably already aware of the extent to which technologies like artificial intelligence (AI) and machine learning have altered the business landscape over the past decade. But many professionals still view AI with suspicion, even mistrust, after being over-sold on its capabilities and underinformed about its limitations.

    If you’re one of those people, it’s time to give AI another (cautious) chance.

    Hybrid solutions, sometimes called smart workflows, combine automation technology with human analysis to improve the efficiency and accuracy of a business process. Smart workflows automate repetitive, tedious, and time-consuming tasks, freeing up time for humans to handle the more complex, strategic tasks that machines still struggle to execute. If you’re reluctant to trust a computer to conduct important research, smart workflows allow you to build in human checks and balances wherever you see fit.

    Here are three ways automation technology can save you time at different stages of your competitive intelligence process.

    Data collection 

    A competitive intelligence process is only as thorough as its data collection method. If you’re missing information during the initial intelligence-gathering stage, none of your hard work afterwards can correct that deficit; you’ll always be left with an incomplete set of facts. That’s why automated intelligence gathering is growing in popularity, even among small-to-midsized businesses. By letting a machine do the first-line data collection, you remove the potential for human error in overlooking relevant company names, keywords, or phrases. With ongoing maintenance, an automated data collection system can drastically reduce the number of man-hours spent searching for information.
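    First-line automated collection of this kind often amounts to matching incoming items against a maintained watch list so that no relevant mention is overlooked. The watch list and feed below are hypothetical; a real system would add stemming, aliases, and deduplication.

```python
# Sketch of keyword-driven intelligence gathering: keep any item that
# mentions a watched company name or phrase (case-insensitive).
WATCHLIST = {"acme corp", "price cut", "merger", "new factory"}

def collect(items):
    """Return the items whose text contains at least one watched term."""
    hits = []
    for item in items:
        text = item.lower()
        if any(term in text for term in WATCHLIST):
            hits.append(item)
    return hits

feed = [
    "Acme Corp announces merger talks",
    "Local weather update",
    "Rival opens new factory in Ohio",
]
print(collect(feed))   # the weather item is filtered out
```

    The "ongoing maintenance" the paragraph mentions is mostly keeping WATCHLIST current as competitors, products, and terminology change.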


    Data classification

    When you’re dealing with a high volume of information, an automated classification system can give structure to the raw intelligence data, breaking it down into more manageable chunks for researchers and analysts to work with. Even if the information is particularly complicated, a human curator can clean up pre-sorted data much more quickly than a raw, unorganized feed. The combination of machine power and human intelligence saves time and reduces employee burnout.


    Report distribution

    Managing your competitive intelligence distribution process can be a time-consuming job in its own right, especially if you’re doing it all manually. Instead, many businesses are switching to an automated report model that takes your pre-classified intelligence data and distributes it to the intelligence users who need to see it and have the right data skills to use it. Users can generally control what type of news they receive and when they receive it, without burdening the intelligence team with dozens or even hundreds of unique schedules and content requests.
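    The automated distribution model described here is essentially a routing step: pre-classified items go to whichever users subscribe to those categories. The subscriptions and headlines below are invented for illustration.

```python
# Sketch of automated report distribution: route classified intelligence
# items to the users subscribed to their categories.
SUBSCRIPTIONS = {
    "alice": {"competitors", "pricing"},
    "bob": {"regulation"},
}

def route(classified_items):
    """classified_items: list of (category, headline) pairs."""
    inbox = {user: [] for user in SUBSCRIPTIONS}
    for category, headline in classified_items:
        for user, topics in SUBSCRIPTIONS.items():
            if category in topics:
                inbox[user].append(headline)
    return inbox

news = [("pricing", "Rival cuts prices 10%"), ("regulation", "New data law passed")]
print(route(news))
```

    Because each user edits only their own subscription set, the intelligence team no longer maintains per-user schedules by hand.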

    Source: CI Radar

  • The 8 most important industrial IoT developments in 2019

    The 8 most important industrial IoT developments in 2019

    From manufacturing to the retail sector, the infinite applications of the industrial internet of things (IIoT) are disrupting business processes, thereby improving operational efficiency and business competitiveness. The trend of employing IoT-powered systems for supply chain management, smart monitoring, remote diagnosis, production integration, inventory management, and predictive maintenance is catching up as companies take bold steps to address a myriad of business problems.

    No wonder, the global technology spend on IoT is expected to reach USD 1.2 trillion by 2022. The growth of this segment will be driven by firms deploying IIoT solutions and giant tech organizations who are developing these innovative solutions.

    To help you stay ahead of the curve, we have listed a few developments that will dominate the industrial IoT sphere.

    1. Cobots are gaining popularity

    Digitization is having a major impact on the industrial robotics segment as connected cobots, or collaborative robots, make their place in the smart manufacturing ecosystem. This trend is improving the efficiency of operations and the reliability of the production cycle.

    IIoT is making robots mobile and collaborative, offering technologies, such as self-driving vehicles (mobile collaborative robots), machine vision (part identification), and additive manufacturing that can boost production efficiency and business growth with an excellent ROI. No wonder, the global cobots market size has crossed USD 649 million in 2018 and is expected to expand at a CAGR of 44.5% between 2019 and 2025.

    2. Digital twins are on the rise

    A growing number of firms are deploying IoT solutions to develop a digital replica of their business assets. Thus, instead of sending data to each physical receiver separately, all the information is sent to the digital twin, enabling business units to access the data with ease.

    Digital twins are growing in popularity as they decrease the complexity of the IoT ecosystem while boosting its efficiency. Gartner shares that 24% of enterprises are already using digital twins and an additional 42% plan to ride on this wave in the coming three years.

    Smart businesses are already using digital twin software to incorporate process data, enabling them to reach accurate insights and address operational inefficiencies.
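    The core of the digital-twin idea above can be sketched as a single in-memory replica: every sensor update flows into the twin, and every business unit reads from the twin instead of polling the physical asset. The asset and field names below are invented for illustration.

```python
# Minimal sketch of a digital twin: one replica receives all updates,
# and any number of consumers read the same consistent snapshot.
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}

    def ingest(self, sensor, value):
        """Called by the data pipeline whenever the physical asset reports."""
        self.state[sensor] = value

    def snapshot(self):
        """Maintenance, planning, and finance all read the same copy."""
        return dict(self.state)

pump = DigitalTwin("pump-17")
pump.ingest("pressure_bar", 4.2)
pump.ingest("temp_c", 61.0)
print(pump.snapshot())   # → {'pressure_bar': 4.2, 'temp_c': 61.0}
```

    Returning a copy from snapshot() is the small but important design choice: consumers can inspect the state freely without mutating the twin.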

    3. Augmented reality is disrupting the manufacturing domain

    AR is benefiting the manufacturing domain in more ways than one. The technology has disrupted the manufacturing areas like product design and development, maintenance and field service, quality assurance, logistics, and hands-on training of new employees.

    For instance, in assembly operations, AR is replacing the traditional paper instruction manual with IoT-enabled systems that provide voice-controlled instructions along with video from the previous assembly operation.

    AR is also allowing manufacturing technicians to have access to instant intelligence and problem insights related to maintenance, thereby improving their efficiency and reducing equipment downtime.

    4. IoT-enabled predictive maintenance is becoming a part of the overall maintenance workflow

    With the advent of Industry 4.0, several enterprises are investing in IoT-enabled predictive maintenance of their assets to fix automated systems before they fail. In today’s competitive business environment, it is extremely important for firms to keep machines running seamlessly. Connected sensors and machine learning are helping companies anticipate component failures in advance, reducing equipment downtime and the time machines are locked up for preventative maintenance checks.

    As a result, many organizations are running predictive analytics and machine learning to monitor systems and gather data, allowing them to estimate when components are likely to fail.

    5. 5G will drive real-time IIoT applications

    5G deployments are digitizing the industrial domain and changing the way enterprises manage their business operations. Industries, namely transportation, manufacturing, healthcare, energy and utilities, agriculture, retail, media, and financial services will benefit from the low latency and high data transfer speed of 5G mobile networks.

    For instance, in the manufacturing domain, 5G will power factory automation, ensuring that the processes happen within the time frame, thereby reducing the risk of downtime. Further, 5G will help manufacturers in real-time production inspection and assembly line maintenance.

    6. Firms are shifting from centralized cloud to edge computing

    Until now, the centralized cloud was a popular choice among firms for controlling connected devices and data. However, with IoT devices and sensors expected to generate an ocean of data, more and more enterprises want IoT to monitor and report data and events remotely.

    Though most firms are using centralized cloud-based solutions to collect data, they are facing issues, such as high network load, poor response time, and security risks. Edge computing is helping businesses collect, analyze, and store data close to its source, thereby reducing the costs and security risks and improving system efficiency. That explains the growing demand for edge computing.

    A research report from Business Insider Intelligence forecasts that by 2020, there will be over 5,635 million smart sensors and other IoT devices globally, generating over 507.5 zettabytes of data. The need to collect and process this data at local collection points is what’s triggering the shift from centralized cloud to edge computing.

    7. Firms will continue to invest in cybersecurity

    Cybersecurity threats continue to evolve each day. Attacks on connected systems pose a serious threat to data and can cause massive system disruption and loss to the firm. A 2018 data breach study by IBM revealed that the average data breach costs companies globally USD 3.86 million.

    As a result, an increasing number of firms are investing in innovative services like virtual private networks (VPNs) to access the internet safely. Such innovative security solutions are becoming increasingly popular with enterprises across domains.

    8. IoT analytics is gaining significance

    While sectors such as manufacturing, aerospace, and energy and utilities are deploying IoT-powered sensors and wireless technologies, the true value of industrial IoT lies in analytics. The connected systems generate a large amount of data that needs to be effectively employed to optimize operations. Thus, the demand for IoT analytics will rise in the coming years. As a result, firms will have to depend on AI and ML technologies to find effective ways to manage the data overload.

    Companies like SAS, SAP, and Teradata are already offering advanced analytics software to help enterprises evaluate real-time data streaming from connected systems on the shop floor.

    Going forward

    IIoT is all set to fuel the fourth industrial revolution. Firms across various industries are adopting innovative IoT devices and technologies to accelerate business growth. These IIoT deployments will help enterprises improve operational efficiency, reduce downtime, and get a serious competitive advantage in their respective domains.

    The IIoT developments shared in this post will set the stage for innovative enterprise platforms and tech advancements. Organizations wanting to remain competitive should be not only aware of these trends but also take adequate measures to embrace them.

    Source: Datafloq

  • The ability to speed up the training for deep learning networks used for AI through chunking

    The ability to speed up the training for deep learning networks used for AI through chunking

    At the International Conference on Learning Representations on May 6, IBM Research shared a look at how chunk-based accumulation can speed up the training of deep learning networks used for artificial intelligence (AI).

    The company first shared the concept and its vast potential at last year’s NeurIPS conference, when it demonstrated the ability to train deep learning models with 8-bit precision while fully preserving model accuracy across all major AI data set categories: image, speech and text. The result? This technique could accelerate training time for deep neural networks by two to four times over today’s 16-bit systems.

    In IBM Research’s new paper, titled 'Accumulation Bit-Width Scaling For Ultralow Precision Training of Deep Networks', researchers explain in greater depth exactly how the concept of chunk-based accumulation works to lower the precision of accumulation from 32-bits down to 16-bits. 'Chunking' takes the product and divides it into smaller groups of accumulation and then adds the result of each of these smaller groups together, leading to a significantly more accurate result than that of normal accumulation. This allows researchers to study new networks and improve the overall efficiency of deep learning hardware.
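    The intuition behind chunking can be illustrated with a small simulation. The sketch below is not IBM's actual hardware scheme; it uses a hypothetical 11-bit quantizer (roughly the significand width of fp16) to mimic a low-precision accumulator, and shows why summing in small groups preserves accuracy where naive accumulation does not.

```python
import math

def quantize(x, bits=11):
    # Round x to `bits` significand bits - a stand-in for a low-precision
    # hardware accumulator (11 bits roughly matches fp16's significand).
    if x == 0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

values = [0.1] * 10000                # true sum is 1000.0

# Naive accumulation: once the running sum is large, each small addend
# falls below the accumulator's precision and is rounded away.
naive = 0.0
for v in values:
    naive = quantize(naive + v)

# Chunk-based accumulation: sum small groups first, then add the group
# totals, so addends stay comparable in magnitude to the running sum.
chunk_size = 100
chunked = 0.0
for i in range(0, len(values), chunk_size):
    group = 0.0
    for v in values[i:i + chunk_size]:
        group = quantize(group + v)
    chunked = quantize(chunked + group)

print(naive, chunked)   # naive stalls far below 1000; chunked lands near it
```

In the naive loop the running sum stalls once an addend is smaller than half the accumulator's step size, which is exactly the swamping effect that chunking sidesteps.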

    Although this approach was previously considered infeasible to further reduce precision for training, IBM expects this 8-bit training platform to become a widely adopted industry standard in the coming years.

    Author: Daniel Gutierrez

    Source: Insidebigdata

  • The big data race reaches the City


    Vast amounts of information are being sifted for the good of commercial interests as never before

    IBM’s Watson supercomputer, once known for winning the television quiz show Jeopardy! in 2011, is now sold to wealth management companies as an affordable way to dispense investment advice. Twitter has introduced “cashtags” to its stream of social chatter so that investors can track what is said about stocks. Hedge funds are sending up satellites to monitor crop yields before even the farmers know how they’re doing.

    The world is awash with information as never before. According to IBM, 90pc of all existing data was created in the past two years. Once the preserve of academics and the geekiest hedge fund managers, the ability to harness huge amounts of noise and turn it into trading signals is now reaching the core of the financial industry.

    Last year was one of the toughest since the financial crisis for asset managers, according to BCG partner Ben Sheridan, yet they have continued to spend on data management in the hope of finding an edge in subdued markets.

    “It’s to bring new data assets to bear on some of the questions that asset managers have always asked, like macroeconomic movements,” he said.

    “Historically, these quantitative data aspects have been the domain of a small sector of hedge funds. Now it’s going to a much more mainstream side of asset managers.”

    Banks are among the biggest investors in big data

    Even Goldman Sachs has entered the race for data, leading a $15m investment round in Kensho, which stockpiles data around major world events and lets clients apply the lessons it learns to new situations. Say there’s a hurricane striking the Gulf of Mexico: Kensho might have ideas on what this means for US jobs data six months afterwards, and how that affects the S&P stock index.

    Many businesses are using computing firepower to supercharge old techniques. Hedge funds such as Winton Capital already collate obscure data sets such as wheat prices going back nearly 1,000 years, in the hope of finding patterns that will inform the future value of commodities.

    Others are paying companies such as Planet Labs to monitor crops via satellite almost in real time, offering a hint of the yields to come. Spotting traffic jams outside Wal-Marts can help traders looking to bet on the success of Black Friday sales each year – and it’s easier to do this from space than sending analysts to car parks.

    Some funds, including Eagle Alpha, have been feeding transcripts of calls with company executives into a natural language processor – an area of artificial intelligence that the Turing test foresaw – to figure out if they have gained or lost confidence in their business. Traders might have had gut feelings about this before, but now they can get graphs.


    There is inevitably a lot of noise among these potential trading signals, which experts are trying to weed out.

    “Most of the breakthroughs in machine-learning aren’t in finance. The signal-to-noise ratio is a problem compared to something like recognising dogs in a photograph,” said Dr Anthony Ledford, chief scientist for the computer-driven hedge fund Man AHL.

    “There is no golden indicator of what’s going to happen tomorrow. What we’re doing is trying to harness a very small edge and doing it over a long period in a large number of markets.”

    The statistics expert said the plunging cost of computer power and data storage, crossed with a “quite extraordinary” proliferation of recorded data, have helped breathe life into concepts like artificial intelligence for big investors.

    “The trading phase at the moment is making better use of the signals we already know about. But the next research stage is, can we use machine learning to identify new features?”

    AHL’s systematic funds comb through 2bn price updates on their busiest days, up from 800m during last year’s peak.

    Developments in disciplines such as engineering and computer science have contributed to the field, according to the former academic based in Oxford, where Man Group this week jointly sponsored a new research professorship in machine learning at the university.

    The artificial intelligence used in driverless cars could have applications in finance

    Dr Ledford said the technology has applications in driverless cars, which must learn how to drive in novel conditions, and identifying stars from telescope images. Indeed, he has adapted the methods used in the Zooniverse project, which asked thousands of volunteers to help teach a computer to spot supernovae, to build a new way of spotting useful trends in the City’s daily avalanche of analyst research.

    “The core use is being able to extract patterns from data without specifically telling the algorithms what patterns we are looking for. Previously, you would define the shape of the model and apply it to the data,” he said.
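    Ledford's distinction, extracting patterns without pre-defining the model's shape, is the essence of unsupervised learning. A minimal, purely illustrative sketch (the data and cluster count are made up; Man AHL's actual methods are far more sophisticated) is a simple k-means clustering that finds groupings in data unaided:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Group 1-D points into k clusters without telling the algorithm
    # in advance what the groups look like.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups of hypothetical price observations;
# the algorithm recovers them with no pattern specified up front.
data = [1.0, 1.2, 0.9, 1.1, 10.0, 10.3, 9.8, 10.1]
print(kmeans(data, 2))
```

The contrast with the older approach Ledford describes is that nothing in the code encodes "there is a low-price regime and a high-price regime"; the structure emerges from the data itself.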

    These technologies are not just being put to work in the financial markets. Several law firms are using natural language processing to carry out some of the drudgery, including poring over repetitive contracts.

    Slaughter & May has recently adopted Luminance, a due diligence programme that is backed by Mike Lynch, former boss of the computing group Autonomy.

    Freshfields has spent a year teaching a customised system known as Kira to understand the nuances of contract terms that often occur in its business.

    Its lawyers have fed the computer documents they are reading, highlighting the parts they think are crucial. Kira can now parse a contract and find the relevant paragraphs between 40pc and 70pc faster than a human lawyer reviewing it by hand.

    “It kicks out strange things sometimes, irrelevancies that lawyers then need to clean up. We’re used to seeing perfect results, so we’ve had to teach people that you can’t just set the machine running and leave it alone,” said Isabel Parker, head of innovations at the firm.

    “I don’t think it will ever be a standalone product. It’s a tool to be used to enhance our productivity, rather than replace individuals.”

    The system is built to learn any Latin script, and Freshfields’ lawyers are now teaching it to work on other languages. “I think our lawyers are becoming more and more used to it as they understand its possibilities,” she added.

    Insurers are also spending heavily on big data fed by new products such as telematics, which track a customer’s driving style in minute detail, to help give a fair price to each customer. “The main driver of this is the customer experience,” said Darren Price, group chief information officer at RSA.

    The insurer is keeping its technology work largely in-house, unlike rival Aviva, which has made much of its partnerships with start-up companies in its “digital garage”. Allianz recently acquired the robo-adviser Moneyfarm, and Axa’s venture fund has invested in a chat-robot named Gasolead.

    EY, the professional services firm, is also investing in analytics tools that can raise red flags for its clients in particular countries or businesses, enabling managers to react before an accounting problem spreads.

    Even the Financial Conduct Authority is getting in on the act. Having given its blessing to the insurance sector’s use of big data, it is also experimenting with a “sandbox”, or a digital safe space where its tech experts and outside start-ups can use real-life data to play with new ideas.

    The advances that catch on throughout the financial world could create a more efficient industry – and with that tends to come job cuts. The Bank of England warned a year ago that as many as 15m UK jobs were at risk from smart machines, with sales staff and accountants especially vulnerable.

    “Financial services are playing catch-up compared to some of the retail-focused businesses. They are having to do so rapidly, partly due to client demand but also because there are new challengers and disruptors in the industry,” said Amanda Foster, head of financial services at the recruiter Russell Reynolds Associates.

    But City firms, for all their cost pressures, are not ready to replace their fund managers with robots, she said. “There’s still the art of making an investment decision, but it’s about using analytics and data to inform those decisions.”

    Source: Telegraph.co.uk, October 8, 2016



  • The challenge AI creates for IT and business leaders

    The challenge AI creates for IT and business leaders

    Artificial intelligence (AI) and AI-augmented data analytics have captured the imagination of everyone from kindergarten to the boardroom, as they change the ways we live, shop, consume news, and govern ourselves. From an IT-centric viewpoint these technologies are changing our business models. They’re also creating fierce competition to retain the limited number of people with the skills needed to transform AI into competitive advantage. IT and business leaders across industries all face the same challenge: how to close the skills gap that’s been created by advancements in AI and data analytics.

    Overcoming this challenge is essential to compete in today’s AI-enabled and disruption-obsessed tech environment. There is a wealth of technology platforms and resources available to businesses to become more data-driven and competitive. To reap technology’s full benefits, though, IT leaders need to reskill their staffs and attract top talent equipped with the right data skills and mindsets. Achieving this won’t happen overnight; it will require support and investment from senior leadership. The strategy outlined below will increase the likelihood of these efforts succeeding.

    Obtain senior management buy-in before proceeding

    Making major changes in an organization requires the support of senior management because projects of this magnitude will have significant business, staffing and budget implications. IT professionals should build their cases for change from a business, not a technology, vantage point. They need to focus on how their plans will create competitive advantage by reducing lost opportunity costs, improving the success rate of new development projects, and enabling new business models.

    First, there needs to be a clear link between IT spending and specific revenue streams. How will each IT dollar spent impact initiatives across departments, whether IT, marketing and sales, or HR and accounting? This encourages good user behavior by linking requests to costs and encourages management to ask questions like the following:

    • How much does an application outage cost per hour?
    • What does it cost to shrink an application’s RTO (recovery time objective) from 4 hours to 2 hours?
    • What are its effects on customer relationships, stock prices, revenues, etc.?
    • And, last but certainly not least, can that money be better spent elsewhere?

    Finally, IT needs to set realistic expectations with senior management regarding the difficulty of retraining and hiring staff, as well as developing and testing new capabilities. Many IT organizations that have provided infrastructure for decades often lack the skills needed to exploit data analytics to their fullest advantage. From a recruiting perspective, many still struggle with the process of creating job descriptions that align with the revised role of IT. The list of new job titles is long and often fuzzy, encompassing everything from Chief Data Officer to Cloud Engineer to IoT Architect. Investing in training and development for existing staff while also allocating resources to recruit for new roles can be a time-consuming and costly investment. However, it’s an investment worth making when done wisely, helping to create a more competitive business model. IT needs to be ready to sell this into the C-suite or risk losing out on the data-driven economy and being outpaced by competitors.

    Use consultants

    Treat the need to reskill your staff with a sense of urgency. Your competitors are, so don’t pinch pennies. Consultants can shorten your time to market with new services built on data analytics and AI/ML by helping to identify missing skills and assist in creating job descriptions and profiles of ideal candidates. This profile should include technical skills and personality traits, education, certifications, prior work experience, and other factors such as a willingness to work evenings or weekends, and career expectations.

    Competent consultants can also help you avoid products that do not fit your requirements by helping to assess functionality, scale, performance, ease of use, etc. In doing so, they help avoid pitfalls that their previous clients encountered as they leveraged their data and AI into a competitive advantage. They can also help you create a shortlist of possible solutions and identify technology and marketing trends that may indicate changes in your strategies.

    Build relationships with local colleges and universities

    Schools are redesigning their curriculums to satisfy the need for technical professionals with skills in data analytics, AI/ML, and cyber-security, and the ability to help users turn these technologies into competitive advantage. The lofty salaries commanded by graduates with these skills mean there is fierce competition for them, as previously noted, so you want to be first in making them job offers. The best way to gain access to them is by building relationships with department heads and individual professors, offering professors consulting engagements where they make good business sense, sponsoring research projects that align with your business needs, and establishing an intern program. Internships not only expose potential new hires to your company, they introduce AI-related skills to existing employees, which can help management identify those with the potential to grow into new roles.

    While providing critical business insights for significant competitive advantage, data analytics and AI/ML are providing CIOs and other technical leaders with opportunities to reskill their staffs and engage with a whole new generation of data-savvy candidates. It doesn’t stop at just training and recruitment though. Leaders need to invest in the right tools and technologies that empower their workforce to harness the full potential of data and AI. Done well, these projects will transform IT’s role within an organization from being a provider of infrastructure to being a source of competitive advantage. Since mastery of these technologies is not optional, now is a great time to start the process.

    Author: Stanley Zaffos

    Source: Insidebigdata

  • The differences in AI applications at the different phases of business growth

    The differences in AI applications at the different phases of business growth

    We see companies applying AI solutions differently, depending on their growth stage. Here are the challenges they face and the best practices at each stage.

    A growing number of companies are seeking to apply artificial intelligence (AI) solutions, whether they want to launch disruptive products or innovate the customer experience. No matter how a business approaches its strategy, it will need to label massive amounts of data, like text, images, audio, and/or video, to create training data for its machine learning (ML) models.

    Of course, AI isn’t developed with a one-size-fits-all approach. We find that companies apply different strategies based on their size and stage of growth. Over the past decade, we’ve seen companies leverage AI solutions and encounter challenges along the way, as they come to us for data labeling, or the data enrichment and annotation that is required for training, testing, and validating their initial ML models and for maintaining their models in production.

    • Startup companies tend to apply narrow AI to tackle specific problems in an industry where they have deep domain expertise. They typically lack data, especially labeled data that is primed and ready to be used for ML training. They may be challenged by choosing the right data annotation tools, and many lack the expertise or funding to build their own data labeling tools.
    • Growth-stage companies are using AI solutions to enhance customer experience and drive greater market share. They typically have a fair amount of data and domain expertise, and they may even have the capabilities to build or customize their own data labeling tool, although perhaps without features like robust workforce analytics. At this stage, navigating competing priorities can be a challenge, where technical resources can be easily stretched and operations staff can get dragged into performing low-value data tasks. The companies in this stage that are applying AI most effectively are those that are giving thoughtful consideration to their customers and missions, focusing on their core competencies, and offloading what makes sense to outside specialists.
    • Enterprise companies typically are using AI in one of two ways: incorporating AI into a product or using it to innovate business processes to generate better efficiency, productivity, or profit margins. Larger companies often have plenty of data and extensive in-house technical and data expertise. They are spending millions of dollars on data and AI, but siloed communication across products and departments can make it difficult to get a unified snapshot of the data landscape and where there are opportunities for AI to improve the business. In general, enterprise companies are not as advanced on the data maturity curve as they’d like to be.

    As companies of all sizes seek to apply AI solutions, the one component that is more important now than ever is the role people play in the process. Data preparation is a detailed, time-consuming task, so rather than using some of their most expensive resources, data scientists, a growing number of companies are using other in-house staff, freelancers, contractors and crowdsourcing to get this massive amount of data work done.

    Best practices for AI solutions implementation

    At the end of the day, it takes smart machines and skilled humans in the loop to ensure the high-quality data that performant AI models require. That’s a crucial dynamic when you consider some of the real-world challenges the technology is in a position to help solve. From the ability to identify counterfeit goods or reduce vulnerability to phishing attacks, to training autonomous vehicles with hardware upgrades that make them safer, it’s quality data that makes AI truly valuable.

    For companies that are looking to apply or develop AI solutions, here are a few best practices we’ve identified that can help ensure efficient, productive data operations: 

    • Secure executive support: Leadership is a key factor in success, and a lack of it is a major reason 87% of data science projects fail to make it to market.
    • Incorporate data science early: Companies that consider data science and data engineering early in their process will see the most success.
    • Collaborate often: Direct access to and clear communication with the people who work with data makes it easier to adjust tools and process (e.g., guidelines, training, feedback loops), which can positively impact data quality and the overall success of an AI project.
    • Be prepared for surprises: Developing AI is iterative, and change is inevitable. Companies should consider their workforce and process thoughtfully to ensure each one can provide the flexibility and agility they will need to facilitate innovation quickly while maintaining accuracy along the way. When you realize you’re going to need more labeled data than planned, and quickly, it’s critical to have the right foundation for quality at greater levels of scale.

    AI requires a strategic combination of people, process and technology

    At any stage of growth, it’s important to understand how to strategically combine people, process, and tools to maximize data quality, optimize worker productivity and limit the need for costly re-work. Leveraging best practices from companies that work with data can put an organization in the best position for success as the AI market continues to grow and new opportunities emerge.

    Author: Paul Christianson

    Source: Dataconomy

  • The importance of ensuring AI has a positive impact on your organization

    The importance of ensuring AI has a positive impact on your organization

    Arijit Sengupta, founder and CEO of Aible, explains how AI is changing and why a single AI model is no longer smart business.

    There’s lots of buzz about artificial intelligence, but as Arijit Sengupta, founder and CEO of Aible, points out, “Everyone has heard a lot about AI, but the AI we’ve been hearing about is not the AI that delivers business impact.” Where is AI headed? Why is a single AI model no longer the right approach? How can your enterprise make the most of this technology?

    AI needs to deliver context-specific recommendations at the moment a business user is making a decision. We’ve moved away from traditional analytics and BI, which look backwards, to a forward-looking technology. That’s a fundamental shift.

    What one emerging technology are you most excited about and think has the greatest potential? What’s so special about this technology?

    Context-specific AI has the greatest potential to change business for the better. The first generation of AI was completely divorced from the context of the business. It didn’t take into account the unique cost-benefit tradeoffs and capacity constraints of an enterprise. Traditional AI assumed that all costs and benefits were equal, but in business, the benefit of a correct prediction is almost never equal to the cost of a wrong prediction.

    For example, what if the benefit of winning a deal is 100 times the cost of unnecessarily pursuing a deal? You might be willing to pursue and lose 99 deals for a single win. An AI that only finds 1 win in 100 tries would be very inaccurate based on model metrics, although it would boost your net revenue. That’s what you want from AI.
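    The arithmetic of that 100:1 example can be made concrete. The numbers below are the illustrative ones from the text, not real deal economics:

```python
# The 100:1 payoff example: pursue 100 deals, win only 1.
benefit_per_win = 100.0   # value of a correct "pursue" decision
cost_per_try = 1.0        # cost of pursuing a deal that is lost
tries = 100
wins = 1

# 1% precision - terrible by conventional model metrics...
accuracy = wins / tries

# ...yet the net result is still positive, because one win
# pays for all 99 losing pursuits with margin to spare.
net = wins * benefit_per_win - (tries - wins) * cost_per_try
print(accuracy, net)
```

A model evaluated only on accuracy would reject this strategy outright, even though it adds revenue, which is exactly Sengupta's point.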

    The second generation of AI has a laser focus on the specific business reality of a company. As Forrester and other analysts have pointed out, AI that focuses on data science metrics such as model accuracy often doesn’t deliver business impact.

    What is the single biggest challenge enterprises face today? How do most enterprises respond (and is it working)?

    Solving the last-mile problem of AI is the single biggest business challenge facing companies today. Right now, most business managers don’t have a way to understand how a predictive model would impact their business. That’s a fundamentally different question than finding out what the AI has learned.

    Just because I tell you how a car works doesn’t mean you know how to drive a car. In fact, in order to drive a car, you often don’t need to know all of the details about how a car works. In the first generation of AI, we obsessed over explaining how the car works in great detail. That’s what was considered “explainable AI.”

    What we are shifting to now is the ability for businesses to understand how the car affects their lives. Enterprises need to know how the AI affects their business outcomes under different business scenarios. Without this knowledge, you can’t get AI adopted because you’re asking business owners to play Russian roulette. You’re not giving them the information they need to understand how a given AI model will affect their KPI. You’re just giving them a few models and telling them to hope for the best.

    Is there a new technology in data or analytics that is creating more challenges than most people realize? How should enterprises adjust their approach to it?

    Traditional AI built on model accuracy can actually be incredibly harmful to a business. AI that’s trained to optimize model accuracy is often very conservative, and that can put a business on a death spiral. A conservative model will tell you to go after fewer and fewer customers so you’re assured of closing almost every deal you pursue, but many times that means you end up leaving a lot of money on the table and slowly destroying your business. AI that maximizes accuracy at the expense of business impact is worse than useless - it destroys value.
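    The death-spiral effect of a conservative, accuracy-driven model can be sketched numerically. The lead probabilities and payoffs below are entirely hypothetical; the point is only that a decision threshold tuned to accuracy (pursue when a win is more likely than a loss) leaves money on the table when payoffs are asymmetric:

```python
# Hypothetical leads with predicted win probabilities; payoff is asymmetric.
benefit, cost = 100.0, 1.0
leads = [0.9, 0.6, 0.3, 0.1, 0.05, 0.02]

def expected_profit(p):
    # Expected value of pursuing one lead with win probability p.
    return p * benefit - (1 - p) * cost

# Accuracy-driven rule: pursue only when a win is more likely than a loss.
conservative = sum(expected_profit(p) for p in leads if p > 0.5)

# Business-aware rule: pursue whenever expected profit is positive,
# i.e. p > cost / (benefit + cost), about 0.01 here.
threshold = cost / (benefit + cost)
business_aware = sum(expected_profit(p) for p in leads if p > threshold)

print(conservative, business_aware)
```

The conservative rule pursues two leads and forgoes four that are individually profitable in expectation, which is the "fewer and fewer customers" dynamic described above.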

    What initiative is your organization spending the most time/resources on today? In other words, what internal project(s) is your enterprise focused on so that your company (not your customers) benefit from your own data or business analytics?

    We’re an early-stage startup with a relatively small volume of data, but we believe in getting started with AI quickly rather than waiting to amass a ton of data. We first used AI to predict which customers were likely to go from a first contact to a first meeting and which were likely to click on an email.

    Over time, we’ve collected more data and been able to optimize our marketing spending across different channels and figure out exactly which customers to focus on. If we had waited until we had a lot of data to get started, we wouldn’t have progressed as far as we have. By getting started with AI quickly, we were able to improve our AI process much faster.

    Where do you see analytics and data management headed in 2020 and beyond? What’s just over the horizon that we haven’t heard much about yet?

    Everyone has heard a lot about AI, but the AI we’ve been hearing about is not the AI that delivers business impact. The AI we’ve been hearing about is the AI of labs that’s abstracted from business realities.

    What’s just over the horizon that people are beginning to wake up to is that to get business impact, you have to have a very different kind of AI. Creating a single AI model doesn’t make any sense because business realities constantly change. What you need to do is create a portfolio of AI models that are tuned to different business realities. You need a different model if your cost to pursue a customer goes up 10 percent or if your average deal size goes up 20 percent. If you create a portfolio of AI models, your business will be much more resilient to change - and the only thing you can count on in business is change.

    Can you describe your solution and the problem it solves for enterprises?

    Aible’s AI platform ensures business adoption by giving users tools tailored to their existing skills and needs. Aible overcomes the last-mile problem by enabling end users to customize models and see how they affect the business. Aible lets you get started quickly with the data you have by fully automating the machine learning process; team members can contribute their unique business insights to AI projects. Uniquely, Aible delivers dynamically balanced AI models so you always deploy the right model at the right time. Aible ensures data security by running in your secure AWS or Azure account or on premises and never sees your data or trained models.

    Author: James E. Powell

    Source: TDWI

  • The increasing impact of AI on cinemas

    The increasing impact of AI on cinemas

    Artificial Intelligence (AI) has become a giant in the tech industry and is transforming the workforce as we know it in all kinds of ways. Everything from transportation manufacturers to home appliance companies is using machine learning to streamline everyday activities.

    Less publicized is the movie theater industry, which is regaining ground previously lost to streaming services by using AI and related technology like machine learning to its advantage. Theaters have learned to adapt to this technology and are revolutionizing their marketing to bring viewers into theater seats. In fact, a number of film studios are experimenting with AI. Notably, movie theaters have taken this opportunity to physically bring people back into their theaters rather than counting those viewers as lost. The movie theater isn’t dead. It’s just transforming, and it’s doing so with the help of AI.

    Personalized advertisement

    A large appeal for the use of AI in movie marketing is personalization, which isn’t too surprising. AI really shines in analytics and compiling data about customer decisions and trends, so it’s natural that a giant industry like the film industry would utilize it to understand and communicate with their customers. The change from previous forms of movie marketing, however, is how exactly they reach those individuals.

    This concept begins with personalized advertisements. The movie advertisements you get on streaming services are being sent to you personally because AI has determined you will enjoy the movie in question. Furthermore, AI will be directing ads with price incentives for movies or concessions at customers based on how likely they are to see a certain film.

    “Giving movie-goers the opportunity to buy a ticket in advance for them and three friends might be the best way to go,” Movio Chief Executive Will Palmer told Indiewire. “On the other end of the spectrum, you might have this ‘least likely’ group, and you’ve got to make a decision: do I leave that group alone or do I activate that group? That might be a case of putting some form of price-based incentive or concession-based incentive to try to attract that group.”

    The idea is that people can buy tickets in advance, as well as concessions. They will be offered discounts when they’re promoted movies that AI thinks they will enjoy based on past experience and purchases. These things are all offered on an individual basis, and tailored specifically to individuals due to data gathered by AI analytics.

    Customer service

    AI help desks and virtual assistants are being used in several industries that depend on customer care for their income and revenue. But these same bots are also beginning to allow people to order concessions before they even get to the cinema. Imagine how this might change the movie-going experience.

    For instance, think about all the times you have waited in line for popcorn or drinks, and how the person ahead of you may not have known what they wanted. If you’ve ever been late to a movie due to prior responsibilities, this is particularly frustrating. But imagine if you could order that food with your ticket. You could just walk up, grab your order, and head into the movie without waiting for people to make up their minds. This makes the movie-going experience much more efficient, with less waiting and better delivery.

    Additionally, some apps are teaming up with movie theaters to replace the app MoviePass, a service in which moviegoers paid $10 a month and were able to see one movie a day for the entire month at no extra charge. Unfortunately, this ended up being unaffordable. And while MoviePass still exists, it’s much more selective about which theaters and movies it works with.

    Some developers have been working to create similar apps with more practical operations, replacing the ticket-buying process altogether. Regarding an app called Sinemia, The Verge summarized: “What the app offers is access to any movie at any time with no blackouts and no theater restrictions whatsoever.” It added: “Sinemia basically loads the funds onto your own personal debit card with the cash necessary to purchase your ticket, and then you’re good to go.”

    Obviously, this is a fun way to get people who don’t normally go to movies to see a few each month, raising ticket sales, and it could make things easier on the theaters as well. Ticket purchasing and payments are totally streamlined with Sinemia. We will probably see more of this in the future.

    Is AI good or bad for the movie industry?

    AI is causing a lot of public concern because of the fear that it is rendering jobs previously done by humans obsolete. Take those at the ticket booth, for instance: with some of the aforementioned apps, they could be out of a job. However, AI may be the film industry’s only chance to adapt and survive. On top of that, right now it looks like AI is actually creating jobs rather than killing them. This means that employees and employers have to be able to adapt their skill sets to the new context, which some do not know how to do.

    From that angle, AI is good for the movie industry. As Fast Company reported, technology like AI is literally helping design storylines and is being used to monitor which movies evoke which emotions in viewers. By using AI’s data to monitor viewer responses, films are being catered better to consumers.

    In fact, theater and acting in general are using AI to move into the future. As we already know, AI has been making an appearance in traditional acting experiences as well, making interactive theater art pieces a popular experience. So AI isn’t killing the film and entertainment industries; it’s saving them. And the human element doesn’t have to be removed if humans learn how to use it. The movie-going experience is improving and will continue to thrive if those in command keep using new technology to their benefit. Don’t think of AI as the enemy; think of it as a tool we can use to enjoy movies in different (more efficient) ways.

    Source: Datafloq

  • The increasing role of AI in regulation

    How the Biden administration will change the AI playing field, and what you should be doing now.

    With President Biden having made some important appointments recently, there’s a lot of speculation about what we can expect from his administration over the course of the next four years with respect to AI/ML and in particular, with regulating Artificial Intelligence applications — to make the technology safer, fairer and more equitable.

    As an analyst covering this space at Info-Tech Research Group, I’m naturally going to throw my hat into the ring. Here are my top four predictions.

    Regulation of AI will be fast-tracked through the House and Senate

    We may not have all the details yet, but the direction and pace are both fairly clear: we can expect regulation to be fast-tracked at the federal level to complement state-level bills. The roadmap includes both recently introduced bills, like the Algorithmic Accountability Act of 2019, and the modernization of existing statutes such as the Civil Rights Act (1964), the Fair Housing Act (1968), and others to cover AI and algorithmic decision-making systems.

    In fact, the driving forces behind the Algorithmic Accountability Act — Senators Ron Wyden and Cory Booker, and Representative Yvette Clarke — are planning to reintroduce their bills in the Senate and House this year.

    Altogether, we can expect to see the administration pursue an agenda that better incorporates AI/ML into existing and new legislative frameworks, and also leaves enough room for flexibility as AI standards and practices continue evolving.

    Ethical AI standards will be developed quickly

    For regulation to be effective it needs to be driven by values, informed by evidence, grounded in a sound risk model, and supported by standards and certifications. So we expect that government agencies will soon sharpen their focus on AI as the administration’s guidance takes shape. NIST and others will double down on developing benchmarks, standards and measurement frameworks for AI technologies, algorithmic bias, explainability, and AI governance and risk management.

    Some of this work is already in progress, for example Facial Recognition Vendor Test and Explainable AI, but we can expect this plan to accelerate fairly quickly.
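    As a toy illustration of what measuring algorithmic bias can look like, the sketch below computes one common fairness statistic, the demographic parity difference (the gap in positive-outcome rates between two groups). The loan-approval data is made up, and real measurement frameworks of the kind NIST develops go far beyond this:

```python
def positive_rate(outcomes):
    """Fraction of decisions in a group that were positive (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy loan-approval decisions (1 = approved) for two demographic groups.
approved_a = [1, 1, 0, 1]   # 75% approved
approved_b = [1, 0, 0, 0]   # 25% approved
gap = demographic_parity_diff(approved_a, approved_b)
print(round(gap, 2))  # 0.5
```

    A standard would then pin down which such statistics to report, on what data, and what gap is acceptable for a given use case.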

    Regulators will be collaborating across borders

    In this interconnected world, the regulation of technology cannot be pursued in isolation, especially for technologies such as AI/ML. There are several signs that lawmakers are willing to join forces and learn from each other, especially from nations that made regulation a priority early on. (After all, when done right, regulation is not an impediment to innovation — more on this below.)

    Over the next four years, we will see increased collaboration on AI regulation, standards, certification, and auditing with European and international organizations, and with neighboring countries, many of whom are already ahead of their U.S. counterparts. Higher levels of global partnership will positively impact efforts to build a more comprehensive legislative framework, both in the U.S. and abroad.

    Federal agencies will get broader mandates that include AI/ML

    A law can tell you what you can or cannot do, but its power comes from being enforced by the courts and by oversight agencies with authority to impose penalties and other regulatory sanctions. At this time, it is unclear what this authority is and how it is divided among the various federal agencies relative to AI/ML.

    We expect this situation to be addressed fairly quickly by broadening the mandates of existing oversight bodies to include Machine Learning and AI-powered applications and systems, as well as by directives to create training, certification, accreditation, and oversight of AI auditors — especially AI bias auditors — similar to food inspectors and consumer safety inspectors.

    What does all this mean for your organization?

    So, what are the implications for your organization, whether you are just thinking about leveraging AI/ML or have been doing it for years?

    My opinion is that regulation — and its flip side, governance — are not evil. When executed properly, regulation creates certainty, establishes a level-playing field, and promotes competition. It also informs internal policy, governance, and accountability. And governance helps to frame the discussion about acceptable risks and rewards from monetizing AI — improving the organization’s odds of success.

    Governance (and hence regulation) also help to establish and strengthen trust: internally within the organization, but, most importantly, with its customers. Indeed, trust is the foundation of all business.

    You can get ahead of any impending regulatory shifts

    Don’t wait until regulation becomes a reality! There are three easy steps you can take to avoid surprises down the road and to prepare your organization:

    1. Don’t wait for AI regulation to come to you! Engage in shaping it through industry associations, think tanks, public policy and civic interest groups, and your House representatives.
    2. Actively govern your organization’s AI-powered applications to establish your process maturity before everyone else — including government — catches up. Businesses simply can’t afford to wait, or they risk deploying a biased system that could harm customers and, as a result, their reputation and balance sheet.
    3. Document and proactively disclose how and where you use AI/Machine Learning, data and analytics, and how these systems are built. AI registers — as leveraged by the cities of Amsterdam and Helsinki, for example — are a straightforward way to share this information with your customers and to increase their trust and loyalty. They will also work for auditors and regulators. And they create the foundation of a minimal viable framework for internal AI governance.
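    As a sketch of what such an AI register entry might contain, here is a minimal example. The field names and values are illustrative assumptions loosely inspired by the public AI registers of Amsterdam and Helsinki, not an official schema:

```python
import json

# Hypothetical AI-register entry for one deployed system; every field
# here is an illustrative assumption, not an official register format.
entry = {
    "system_name": "Chatbot for resident services",
    "purpose": "Answer routine questions about city services",
    "data_used": ["anonymized chat logs", "public service catalog"],
    "model_type": "intent-classification model",
    "human_oversight": "Escalates to a human agent on low confidence",
    "contact": "ai-governance@example.org",
}

print(json.dumps(entry, indent=2))
```

    Even a record this simple answers the questions an auditor or customer asks first: what the system does, what data it sees, and who is accountable for it.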

    Governance and regulation are truly not a burden. And even if they cost money, they represent an important, value-added investment in business success. Governance is a mechanism to create value, monetize new technologies like AI, and grow and strengthen the business (while monitoring and mitigating risks). The greater risk lies in ignoring the potential of AI, or in allowing competitors to get there first.

    Author: Natalia Modjeska

    Source: Towards Data Science

  • The massive impact of data science on the web development business

    “A billion hours ago, modern Homo sapiens emerged.
    A billion minutes ago, Christianity began.
    A billion seconds ago, the IBM personal computer was released.
    A billion Google searches ago… was this morning."

    - Hal Varian, Google’s Chief Economist, December 2013 (from the book Work Rules! by Laszlo Bock)

    The last line of the above quote characterizes the world’s hunger for information. Information plays a huge role in our lives. Information consumed by our senses helps our mind make decisions. But what happens when the mind is flooded with information? You get confused, annoyed and scared of decision-making. This is where your computers and processors come to the rescue, and this is when the term 'information' is replaced by 'data'.

    Every minute, more than a hundred hours of video content is uploaded to YouTube. More than 50 billion apps have been downloaded from application stores since 2008. More than 2 billion people are signed up on social media websites. These numbers give you just a glimpse of the amount of data flowing through the world's optical fibers every second. And now the question comes: how do we make this massive amount of data useful? The answer is analytics. If you know how to play with numbers and extract the nectar of useful insights from this huge amount of data using appropriate analytical tools, then you, my friend, are a real data scientist.

    Data science is helping many businesses, irrespective of them being B2B or B2C. But in this article, we are going to talk more about its role in one of the biggest B2B industries: Custom Web Development. If you are a web developer, you must not ignore the rise of data science in your profession, and if you are thinking about hiring one, then you should know about the latest trends to supervise the development process in a better way. So, let’s discuss the impact of data science in the transformation of web development:

    1. Re(de)fining the software solutions

    Not very long ago, web developers used to rely on creativity for page layouts and menu details. It was generally guesswork, but now data science tells web developers about the layouts and details of competitors' websites. Hence, they can propose a unique design after carefully evaluating the competition.

    Also with the help of the latest analytical tools, web developers can know what the requirements of the end users are. They can suggest particular functions or features which are popular among the customers based on the analysis of consumer data. In this way, data science is assisting the developers in providing better and faster software solutions to their clients.

    2. Automatic updates

    Gone are the days when updates had to be manually administered by the developers. This is the era of automation. Machine learning has enabled tools to analyze consumer behavior and data available on social media platforms to come up with required updates. The websites are made self-learning so that they can improve themselves with the changing demands of the customers. It is possible only because data science is doing its job perfectly.

    Although this part still faces some challenges in creating customized solutions for different clients, custom web development services will soon make it a piece of cake with the help of data science.

    3. Customizing for end users

    So far we have discussed how web development can be customized for clients using data science, but the real goal should be the satisfaction of end users. And satisfaction is a dependent variable of personalization. To create a personalized product for users, you need to know them, and in this regard data science is helping web developers.

    Spending habits, interest areas, preferred websites, geographical location, age, gender, and so on: all of this end-user information is used to create algorithmic models which can predict a consumer’s alignment towards your web apps. Using these models, you can not only give the user a personalized experience on the website but also strategically place your ads targeting specific customer segments, thus creating a win-win situation for both buyer and seller.
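    As a toy sketch of such an algorithmic model, the snippet below scores a user's affinity with a hand-weighted logistic function. The features, weights, and bias are illustrative assumptions; a real system would learn them from historical behavioral data:

```python
import math

# Hand-set weights over illustrative user attributes; a trained model
# would estimate these from data instead.
WEIGHTS = {"visits_per_week": 0.8, "avg_spend": 0.05, "is_target_age": 1.2}
BIAS = -3.0

def affinity(user: dict) -> float:
    """Predicted probability (0..1) that the user engages with the app."""
    z = BIAS + sum(WEIGHTS[k] * user.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

user = {"visits_per_week": 4, "avg_spend": 20.0, "is_target_age": 1}
score = affinity(user)
print(round(score, 3))
```

    A score like this is what lets a site rank personalized content, or decide which ad segment a visitor belongs to.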

    4. Changing hot-skills

    Apart from changing the way the web is designed by developers, data science is influencing the transformation of web development in one more way: by revolutionizing the job market. With the ever-changing needs of the industry, a web development company wants employees equipped with the skills to use the latest data and analytics tools.

    Developers looking for jobs today are expected to know tools like Python and Google Analytics. They are asked about their proficiency in creating AI and ML programs in their interviews. Therefore, one has to stay updated to stay relevant.

    5. Customer’s expectations

    Do you get irritated when your Uber driver calls to ask about your pick-up location, even though it can be easily tracked by GPS and is clearly displayed on his device’s screen? Won’t you feel uncomfortable if you misspell something while typing on your messenger and autocorrect stops helping? And don’t you feel nice when you buy a phone online and the web app suggests covers for your new phone?

    Well, if the answer is yes, then you are becoming dependent on data science too. Don’t worry, you're not the only one. Customers worldwide like the extra help provided by businesses. And this dependency on data will soon make the use of data science a hygiene factor in web development.


    Although it’s called Data Science, using it is nothing less than an art. It requires expertise and dedication to develop a web app which completely harnesses the potential of data science.

    Data science is a vast field. It encompasses AI, machine learning, big data, analytics, and more, and it also drives technologies such as the Internet of Things and AR/VR. Hence, when all the modern buzzwords of business are somehow related to data science, it would take absolute ignorance to neglect the role of data science in the development of websites and web apps.

    Source: Datafloq

  • The most wanted skills for organizations migrating to the cloud

    Given the widespread move to cloud services underway today, it’s not surprising that there’s growing demand for a variety of cloud-related skills.

    Earlier this year, IT consulting and talent services firm Akraya Inc. compiled a list of the most in-demand cloud skills for 2019. Let's take a look at them:

    Cloud security

    Cloud security is a shared responsibility between cloud providers and their customers. That creates a need for professionals with specialization in cloud security skills, including those who can leverage cloud security tools.

    Machine learning (ML) and artificial intelligence (AI)

    In recent years cloud vendors have developed and expanded their set of tools and services that allow organizations to reap the benefits of machine learning and artificial intelligence in the cloud. Companies need people who can leverage these new capabilities of the cloud.

    Cloud migration and deployment within multi-cloud environments

    Many organizations are looking to adopt multiple cloud services and are looking for professionals who can contribute to their cloud migration efforts. Cloud migration has its risks and is not an easy process; improper migration often leads to business downtime and data vulnerability. This means that employees with the appropriate skill set are key.

    Serverless architecture

    In a serverless architecture, the underlying server infrastructure is managed by the cloud provider rather than by developers. Today’s serverless platforms are also built on industry-standard technologies and programming languages that help move serverless applications from one cloud vendor to another, Akraya said. Companies need expertise in serverless application development.

    Author: Bob Violino

    Source: Information-management

  • The reinforcing relationship between AI and predictive analytics

    Enterprises have long seen the value of predictive analytics, but now that AI (artificial intelligence) is starting to influence forecasting tools, the benefits may start to go even deeper.

    Through machine learning models, companies in retail, insurance, energy, meteorology, marketing, healthcare and other industries are seeing the benefits of predictive analytics tools. With these tools, companies can predict customer behavior, foresee equipment failure, improve forecasting, identify and select the best product fit for customers, and improve data matching, among other things.

    Enterprises of all sizes are now finding that the combination of predictive analytics and AI can help them stay ahead of their competitors.

    Forecasting gets a boost with AI

    Retail brands are constantly looking to stay relevant by associating themselves with the latest trends. Before each season, designers are continuously working on creating new styles and designs they think will be successful. However, these predictions can be faulty based on a number of factors, such as changes in customer buying patterns, changing tastes in particular colors or styles, and other factors that are difficult to predict.

    AI-based approaches to demand projection can reduce forecasting errors by up to 50%, according to Business of Fashion. This improvement can mean big savings for a retail brand's bottom line and positive ROI for organizations that are inventory-sensitive.

    Another industry that has seen tremendous improvements recently is meteorology and weather forecasting. Traditionally, weather forecasting has been prone to error. However, that is changing, as the accuracy of 5-day forecasts and hurricane tracking forecasts has improved dramatically in recent years.

    According to the Weather Channel, hurricane track forecasts are now more accurate five days in advance than two-day forecasts were in 1992. These extra few days can give people in a hurricane's path extra time to prepare and evacuate, potentially saving lives.

    Another example is the use of predictive analytics by utility companies to help spot trends in energy usage. Smart meters monitor activity and notify customers of consumption spikes at certain times of the day, helping them cut back on power usage. Utility companies are also helping customers predict when they might get a high bill based on a variety of data points, and can send out alerts to warn customers if they are running up a large bill that month.
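    A minimal sketch of how such a consumption-spike alert could work: flag readings that exceed the trailing-window mean by a few standard deviations. The window size, threshold, and usage data are illustrative assumptions, not any utility's actual method:

```python
import statistics

def spike_hours(readings, window=24, n_sigma=3.0):
    """Return indices of readings far above the trailing-window norm."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        sd = statistics.pstdev(recent) or 1e-9  # avoid zero std dev
        if readings[i] > mean + n_sigma * sd:
            alerts.append(i)
    return alerts

# Toy hourly kWh readings: a day of normal use, then a spike at hour 25.
usage = [1.0, 1.2] * 12 + [1.1, 5.0, 1.0]
print(spike_hours(usage))  # [25]
```

    The same rolling-statistics idea scales from a single household meter to fleet-wide monitoring.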

    Reducing downtime and disturbance

    For industries that heavily rely on equipment, such as manufacturing, agriculture, energy, mining etc., unexpected downtime can be costly. Companies are increasingly using predictive analytics and AI systems to help detect and prevent failures.

    AI-enabled predictive maintenance systems can self-monitor and report equipment issues in real time. IoT sensors attached to critical equipment gather real-time data, spotting issues or potential problems as they arise and notifying teams so they can respond right away. The systems can also predict upcoming issues, reducing costly unplanned downtime.
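    A minimal sketch of the "predict upcoming issues" idea: fit a straight line to recent sensor readings with least squares and extrapolate when the trend will cross a failure threshold. The readings and threshold are illustrative; production systems use far richer models:

```python
def hours_until_threshold(readings, threshold):
    """Extrapolate a linear trend; return hours until it hits threshold."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    # Ordinary least-squares slope and intercept.
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
    slope /= sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # not trending toward failure
    intercept = y_mean - slope * x_mean
    return (threshold - intercept) / slope - (n - 1)

# Toy example: bearing temperature rising ~0.5 °C per hour; alarm at 90 °C.
temps = [80.0, 80.5, 81.0, 81.5, 82.0]
print(hours_until_threshold(temps, 90.0))  # 16.0
```

    An estimate like "about 16 hours until the alarm threshold" is exactly what lets a team schedule maintenance before the unplanned outage happens.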

    Power plants need to be monitored constantly to make sure they are functioning properly and safely, and are providing energy to all the customers that rely on them for electricity. Predictive analytics is being used to help run early warning systems that can identify anomalies and notify managers of issues weeks to months earlier than traditional warning systems. This can lead to improved maintenance planning and more efficient prioritization of maintenance activities.

    Additionally, AI can help predict when a component or piece of equipment might fail, reducing unexpected equipment failure and unplanned downtime while also lowering maintenance costs.

    In industries which rely heavily on location data, such as mining, making sure you're operating in the correct area is paramount. Goldcorp, one of the largest gold mining companies in the world, partnered with IBM Watson to improve its targeting of new deposits of gold.

    By analyzing previously collected data, IBM Watson was able to improve geologists' accuracy of finding new gold deposits. Through the use of predictive analytics, the company was able to gather new information from existing data, better determine specific areas to explore next, and reach high-value exploration targets faster.

    Increased situational awareness

    Predictive analytics and AI are also great at anticipating situational events: collecting data from the environment and making decisions based on that data. Such systems help predict future events from data rather than merely reacting to current conditions.

    Brands need to stay on top of their online presence, as well as what's being said about them on social media. Tracking social media to get real-time feedback from customers is important, especially for retail brands and restaurants. Bad reviews and negative comments can be detrimental, particularly for smaller brands.

    With this awareness and by tracking comments on social media in (near) real-time, companies can gather immediate feedback and respond to situations quickly. Situational awareness can also help with competition tracking, market awareness, market trend predictions and anticipated geopolitical problems.
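    As a toy sketch of such near-real-time social listening, the snippet below flags posts containing negative keywords so a team can respond quickly. The keyword list is a crude illustrative assumption; real systems use trained sentiment models:

```python
# Crude keyword list standing in for a real sentiment model.
NEGATIVE = {"terrible", "awful", "rude", "worst", "cold"}

def needs_response(post: str) -> bool:
    """Flag a post if any negative keyword appears in it."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & NEGATIVE)

posts = [
    "Loved the new menu!",
    "Service was terrible and the food was cold.",
]
flagged = [p for p in posts if needs_response(p)]
print(len(flagged))  # 1
```

    Even this crude filter illustrates the workflow: the value comes from routing flagged posts to a human fast, not from the classifier itself.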

    With companies of all sizes in every industry trying to stay ahead of their competitors and predict market trends, this forward-looking approach of predictive analytics is proving valuable. Predictive analytics is such a core part of AI application development that it is one of the core seven patterns of AI identified by AI market research and analysis firm Cognilytica.

    The use of machine learning to help give humans more data to make better decisions is compelling, and it's one of the most beneficial uses of machine learning technology.

    Author: Kathleen Walch

    Source: TechTarget

  • The status of AI in European businesses

    What is the future of AI (artificial intelligence) in Europe and what does it take to build an AI solution that is attractive to investors and customers at the same time? How do we reimagine the battle of 'AI vs Human Creativity' in Europe? 

    Is there any company that is not using AI or isn’t AI-enabled in some way? Whether for startups or corporates, it is no news that AI is boosting digital transformation across industries at a global level, and hence it has traction not only with investors but is also the focus of government initiatives across countries. But where does Europe stand relative to the US and China in terms of digitization, and how could collective effort push AI as an important pan-European strategic topic?

    First things first: According to McKinsey, the potential of Europe to deliver on AI and catch up against the most AI-ready countries such as the United States and emerging leaders like China is large. If Europe on average develops and diffuses AI according to its current assets and digital position relative to the world, it could add some €2.7 trillion, or 20%, to its combined economic output by 2030. If Europe was to catch up with the US AI frontier, a total of €3.6 trillion could be added to collective GDP in this period.

    What comprises the AI landscape and is it too crowded?

    I recently attended a dedicated panel on 'AI vs Human Creativity' on the first day of the Noah conference 2019 in Berlin. Moderated by Pamela Spence, Partner of Global Life Sciences and Industry leader at EY, the discussion started with an open question: is the AI landscape too crowded? According to a report by EY, there are currently about 14,000 startups globally that can be associated with the AI landscape. But what does this mean when it comes to the nature of these startups?

    Minoo Zarbafi, VP of Bertelsmann Investments Digital Partnerships, added perspective to these numbers: 'There are companies that are AI-enabled and then there are so-called AI-first companies. I differentiate because there are almost no companies today that are not using AI in their processes. From an investor perspective, we at Bertelsmann like AI-first companies which are offering a B2B (business-to-business) platform solution to an unsolved problem. For instance, we invested in China in two pioneer companies in the domain of computer vision that are offering a B2B solution for autonomous driving'. Minoo added that from a partnership perspective Bertelsmann looks at AI companies that can help on the digital transformation journey of the company. 'The challenge is to find the right partner with the right approach for our use cases. And we actively seek the support of European and particularly German companies from the startup ecosystem when selecting our partners', she pointed out.

    The McKinsey report also notes a positive point: Europe may not need to compete head to head, but rather in areas where it has an edge (such as B2B and advanced robotics), and can continue to scale up one of the world’s largest bases of technology developers into a more connected, Europe-wide web of AI-based innovation hubs.

    A growing share of funding from Series A and beyond reflects the increased maturity of the AI ecosystem in Europe. Pamela Spence from EY noted: 'One in 12 startups uses AI as a part of their product or services, up from one in 50 about six years ago. Startups labelled as being in AI attract up to 50% more funding than other technology firms. 40% of European startups that are claimed as AI companies actually don’t use AI in a way that is material to their business'.

    AI and human creativity go hand-in-hand

    Another interesting and important question is how far we are from the paradigm of clever thinking machines. Why should we be afraid of machines? Hans-Christian Boos, CEO & Founder of Arago, recalls how machines were once meant to do tasks that are too tedious, expensive, or complex for humans. 'The principle of the machine changes with AI. It used to just automate tasks or standardise them. Now, all you need is to describe what you want as an outcome and the machine will find that outcome for you; that is a different ballgame altogether. Everything is result-oriented', he says.

    Minoo Zarbafi adds that as human beings, we have a limited capacity for processing information. 'With the help of AI, you can now digest much more information which may, combined with human creativity, cause you to find innovative solutions that you could not see before. One could say, the more complexity, the better the execution with AI. At Bertelsmann, our organisation is decentralised and it will be interesting to see how AI leverages operational execution'.  

    AI and the political landscape

    Why discuss AI when we talk about the digital revolution in Europe? According to the tech.eu report titled ‘Seed the Future: A Deep Dive into European Early-Stage Tech Startup Activity’, it would be safe to say that Artificial Intelligence, Machine Learning and Blockchain lead the way in Europe. The European Commission has identified Artificial Intelligence as an area of strategic importance for the digital economy, citing its cross-cutting applications to robotics, cognitive systems and big data analytics. In an effort to support this, the Commission’s Horizon 2020 programme includes considerable funding for AI, allocating €700M of EU funding specifically.

    Chiara Sommer, Investment Director of Intel Capital, reflected on this by saying: 'In the present scenario, the implementation of AI starts with workforce automation, with a focus on how companies could reduce cost and become more efficient. The second generation of AI companies focuses on how products can offer solutions and solve problems like never before. There are entire departments that can be replaced by AI. Having said that, the IT industry adopts AI fastest, and then you have industries like healthcare, retail, and the financial sector that follow'.

    Why are some companies absorbing AI technologies while most others are not? Among the factors that stand out are their existing digital tools and capabilities and whether their workforce has the right skills to interact with AI and machines. Only 23% of European firms report that AI diffusion is independent of both previous digital technologies and the capabilities required to operate with those digital technologies; 64% report that AI adoption must be tied to digital capabilities, and 58% to digital tools. McKinsey reports that the two biggest barriers to AI adoption in European companies are linked to having the right workforce in place.

    Making the use of AI effective and impactful is certainly a collective effort of industries, governments, policymakers, and corporates. Instead of asking how AI will change society, Hans-Christian Boos rightly concludes: 'We should change the society to change AI'.

    Author: Diksha Dutta

    Source: Dataconomy

  • The three key challenges that could derail your artificial intelligence project

    It’s been abundantly clear for a while that in 2017, artificial intelligence (AI) is going to be front and center of vendor marketing as well as enterprise interest. Not that AI is new – it’s been around for decades as a computer science discipline. What’s different now is that advances in technology have made it possible for companies ranging from search engine providers to camera and smartphone manufacturers to deliver AI-enabled products and services, many of which have become an integral part of many people’s daily lives. More than that, those same AI techniques and building blocks are increasingly available for enterprises to leverage in their own products and services without needing to bring on board AI experts, a breed that’s rare and expensive.

    Sentient systems capable of true cognition remain a dream for the future. But AI today can help organizations transform everything from operations to the customer experience. The winners will be those who not only understand the true potential of AI but are also keenly aware of what’s needed to deploy a performant AI-based system that minimizes rather than creates risk and doesn’t result in unflattering headlines.

    These are the three key challenges all AI projects must tackle:

    • Underestimating the time and effort it takes to get an AI-powered system up and running. Even if the components are available out of the box, systems still need to be trained and fine-tuned. Depending on the exact use case and requirements for accuracy, it can be anything between a few hours and a couple of years to have a new system up and running. That’s assuming you have a well-curated data set available; if you don’t, that’s another challenge.
    • AI systems are only as good as the people that program them and the data they feed them. It's also people who decide to what degree to rely on the AI system and when to apply human expertise. Ignoring this principle will have unintended, likely negative consequences and could even be the determinant between life and death. These are not idle warnings: We’ve already seen a number of well-publicized cases where training bias ended up discriminating against entire population groups, or image recognition software turned out to be racist; and yes, lives have already been put at risk by badly trained AI programs. Lastly, there’s the law of unintended consequences: people developing AI systems tend to focus on how they want the system to work, but not how somebody with criminal or mischievous intent could subvert it.
    • Ignore legal, regulatory and ethical implications at your peril. For example, you're at risk of breaking the law if the models you run take into consideration factors that mustn't be used as the basis for certain decisions (e.g., race, sex). Or you could find yourself with a compliance breach if you’re under obligation to provide an exact audit trail of how a decision was arrived at, but where neither the software nor its developers can explain how the result came about. A lot of grey areas surround the use of predictions when making decisions about individuals; these require executive level discussions and decisions, as does the thorny issue of dual-use.

    Source: forrester.com, January 9, 2017

  • The uses of artificial intelligence when managing legal contracts


    Legal contracts are the foundation on which organizations and their interactions with each other operate in the world of commerce. Contracts move through different stages from genesis to execution and expiry. The material manifestation of the contract object is usually a document, which is manually processed, printed and signed to formalize the contract. Managing the documents, as well as monitoring compliance with the process outlined in the contents of the contract, are onerous tasks with many unseen gaps.

    Along with guiding the interaction in terms of obligations and entitlements, the contract objects encode the perceived risks and remedial strategies associated with risk management. This makes contracts integral to directing the activities of many different parts of the company and requires visibility across the organization.

    Technology aids effective management of this critical aspect of the business by moving from manual material management to a digital process. A centralized repository for all the contracts of a company improves contract management efficiency and provides visibility into the risks and possible resolutions across the organization.

    By establishing a single source of truth for the rights and commitments a company holds in its contracts, contract management software greatly reduces the risk of missed deadlines, opportunities and penalties—or worse, litigation.

    Managing the lifecycle of complex contracts becomes simpler by the use of these platforms. But to truly improve business operations through contract management, organizations today are turning to even newer data-driven technologies that can help them make intelligent, proactive decisions that maximize their commercial relationships and opportunities.

    AI’s Role – Accuracy, speed, and insight

    Artificial intelligence (AI) driven contract intelligence can provide many insights that drive optimum performance throughout the life of a contract, from its initial negotiation and approval, through execution and performance to its completion and renewal.

    Consider the case of location data and platform company HERE Technologies. HERE was managing 70,000 contracts in non-digitized, unstructured forms, which represented a wealth of data that could not be easily accessed, acted upon, or shared across the organization. After HERE implemented a contract management platform to digitize the legacy contracts, the platform's AI engine helped the company not only centralize its contract data, but also identify attributes and clauses and semantically match them to the contract types and definitions that HERE had previously established. The power and accuracy of the intelligence derived from AI accelerated the entire digitization effort by months, empowering HERE’s employees to make better business decisions when managing their contracts. HERE’s sales teams gained deeper insights into past customer contracts, which allowed them to make proactive decisions on renewals, up-sell, and cross-sell, thereby increasing revenue and accelerating contract turnaround time.

    These AI engines convert contracts from unstructured documents into analyzable digital assets in a matter of minutes.

    Why AI?

    These advanced technologies provide the insights needed to accelerate commerce, mitigate risk, preserve cash flow and negotiate optimal future contracts while boosting productivity and reducing  operational costs by using fewer resources. It can also reduce time spent manually authoring and reviewing lengthy contracts and ensure compliance. More organizations are poised to adopt this technology to benefit from contract best practices across the organization and speed up the negotiation process by providing the right insights at the right time.         

    While there are many positives, it is important to recognize that AI is not a panacea. Because of its inherent automation, AI will inevitably change the content of contracts to make them consistent organization-wide. This will ultimately impact the overall contracting process. It will influence the current roles of people responsible for, or involved in, creating, executing and overseeing contracts. Organizations may expect to be able to reduce their resource requirements, but rather than eliminating positions, roles will likely shift or transform. This allows teams to focus on strategic functions such as building relationships with prospects or providing counsel.

    Setting the course

    An effective federation of filtered information is necessary to extract useful intelligence that can feedback into process optimization. A data driven approach to understand current limitations, demonstrate results, and align any necessary training or organizational changes forms the core of a successful adoption strategy. Done right, AI for contract management has the potential to empower organizations to stay out front by turning repositories of contracts into indispensable strategic advantages.

    Author: Sunu Engineer

    Source: Insidebigdata

  • Three variants of machine learning fortifying AI deployments


    Although machine learning is an integral component of AI (Artificial Intelligence), it’s critical to realize that it’s just one of the many dimensions of this collection of technologies. Expressions of supervised and unsupervised learning may be the foundation of many contemporary AI applications, but they’re substantially enhanced by interacting with other aspects of cognitive computing.

    Certain visual approaches of graph aware systems will significantly shape the form machine learning takes in the near future, exponentially increasing its value to the enterprise. Developments in topological data analysis, embedding, and reinforcement learning are not only rendering this technology more useful, but much more dependable for a broader array of use cases.

    Topological data analysis

    Topological data analysis is arguably at the vanguard of machine learning trends because of its fine-grained pattern analysis that supersedes that of traditional supervised or unsupervised learning. Although technically part of unsupervised learning, topological data analysis 'is a clustering technique where you get way better results', Aasman explained. Clustering is a visual analytics approach supported by graphs that reveal where data are populated according to certain segments. Aasman used a simple example to explain the effectiveness of topological data analysis: 'There’s five positions in basketball, but then if you analyze the players based on a set of features, you find that there’s like, 200 types of basketball players'.

    The advantage of this approach is the granularity in which it’s able to micro-segment datasets. Topological data analysis is useful for pinpointing the nuanced, myriad facets involved in constructing predictive digital twins to model entire production environments. In healthcare, it can indicate that instead of two forms of diabetes, there are over 20 distinguishable forms, 'in the sense of how they react to certain treatments or medications or their temporal unfolding', Aasman revealed. A highly pragmatic, horizontal deployment of topological techniques is for understanding machine learning model results for interpretability and explainability. For this use case, these representations can reveal the inner workings of deep neural networks to illustrate for which features models learned well and which they didn’t. SAS Senior Manager of AI and Machine Learning Research and Development Ilknur Kabul described those representations as essentially graphs.
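The basketball example above can be sketched with a plain k-means pass (a toy illustration in standard Python with synthetic data, not the topological tooling described here): clustering the same players once at coarse granularity and once at a much finer one shows how micro-segmentation surfaces many more 'types' than the five nominal positions.

```python
import random

random.seed(0)

# Hypothetical player features: (height_cm, assists_per_game, rebounds_per_game)
players = [(190 + random.gauss(0, 10),
            random.uniform(0, 10),
            random.uniform(0, 12)) for _ in range(200)]

def kmeans(points, k, iters=20):
    """Minimal k-means: alternate assignment and center recomputation."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers as cluster means; keep the old center if a cluster emptied
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

coarse = kmeans(players, 5)    # roughly "the five positions"
fine = kmeans(players, 40)     # a finer segmentation reveals many more "types"
print(len([c for c in fine if c]))   # number of non-empty micro-segments
```

The same data supports both views; only the requested granularity changes, which is the intuition behind clustering-based micro-segmentation.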


    The visual manifestations of graph settings are pivotal for generating the features on which to train machine learning models. Features are directly responsible for the prediction accuracy of models. According to Cambridge Semantics CTO Sean Martin, engineering those features 'is a combination of deciding which facets of the data to use, and the transformation of those facets into something closer to a vector'. Vectors are simply data that have been converted into numbers, which become the basis for sophisticated equations for machine learning predictions so that 'if you’ve got X you can solve for Y', Martin maintained. Embedding is the process of plotting various vectors in a graph to perform this math to determine models’ features. It involves 'reducing the graph to these vector spaces that you can then look to see if you can find equations', Martin said. Graph embedding hinges on transforming vectors to decrease the amount of data plotted in graphs, while still including the full scope of that data for predictions.

    There are several ways embedding with graphs makes machine learning more effectual. Specifically, it improves value derived from:

    • Transformations: In graphs, organizations can preserve the relationships between vectors before and after transformations, allowing them to contextualize them better for feature detection. This benefit underpins 'a far less heavy lift to place those pivoting transformations on the data elements that you are finding important', Martin noted.
    • Multi-dimensional data: High dimensional data is oftentimes cumbersome because of the large number of features (and factors) involved. When creating models to predict whether patients will require respiratory assistance after hospitalization, for example, organizations have to include all of their demographic data, medical history data, that of their family, and more. Flexible, relationship-savvy graph settings are ideal for the math required to generate credible features; the higher the data’s dimensionality, the more features it offers for accurate predictions.
    • Vectors: As the number of vectors increases for feature generation, it becomes more crucial to consistently 'represent some sort of data point in juxtaposition with all of the other vectors…created', Martin commented. Graphs can visually represent, and maintain, the connections between vectors and data points that make them meaningful for feature engineering.
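The vectorization idea above can be sketched in a few lines (an illustration only; the records and facet names are invented): categorical facets of a record are transformed into a numeric vector, after which simple vector math such as cosine similarity can feed feature engineering.

```python
import math

# Hypothetical records with categorical facets (illustrative data)
records = [
    {"department": "cardiology", "outcome": "readmitted", "age_band": "senior"},
    {"department": "cardiology", "outcome": "discharged", "age_band": "senior"},
    {"department": "oncology",   "outcome": "readmitted", "age_band": "adult"},
]

# Build a vocabulary of (facet, value) pairs, then one-hot encode each record
vocab = sorted({(k, v) for r in records for k, v in r.items()})

def to_vector(record):
    return [1.0 if record.get(k) == v else 0.0 for k, v in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vecs = [to_vector(r) for r in records]
# Records sharing more facet values land closer together in vector space
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

The first two records share two facet values and so score higher similarity than the first and third, which share only one; in a graph setting the same vectors would additionally keep their links back to the original data elements.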

    Reinforcement learning

    In professional settings, reinforcement learning is likely the least used variety of machine learning. One of the caveats of deploying reinforcement learning pertains to how these statistical models learn. 'An agent interacts with an environment and learns how to interact with that environment', Kabul clarified. 'The agent can make many mistakes in that environment, but when applying it to the real world, we don’t have the luxury about making so many mistakes'. The primary distinction between reinforcement learning and more commonplace applications of supervised/unsupervised learning is the latter involve some annotated training data. Conversely, the learning in the former is predicated on what Kabul termed a 'sequential decision making process; we learn through sequentially interacting through the agent'.

    Enterprise applications of reinforcement learning include aspects of automated model building in self-service data science platforms. Kabul mentioned that other use cases include energy efficiency in smart cities. However, reinforcement learning’s penchant for individualization may exceed that of unsupervised and supervised learning for customer interactions, which could potentially revamp both marketing and sales verticals. Kabul referenced a marketing use case in which various materials are sent to customers to try to elicit (and optimize) responses: 'Traditionally you can segment the customers and [inter]act differently with those groups. But that’s not scalable; that’s not individualized. What we are trying to do is personalize those: create many journeys, create many interactions with the customer so that we can treat each one individually'.  
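The agent-environment loop described above can be illustrated with minimal tabular Q-learning on a toy corridor environment (a sketch under invented assumptions, not any vendor's implementation): the agent is free to make mistakes in simulation, and the learned action values encode a sequential decision policy.

```python
import random

random.seed(1)

# Toy environment: states 0..4 on a line; actions move left/right; reward only at the goal
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: learn action values by repeated trial-and-error interaction
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                       # episodes of sequential interaction
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current values, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy should move right, toward the rewarding state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

All of the agent's mistakes happen inside `step`, a simulated environment; that containment is exactly why applying reinforcement learning directly to real-world customers is the hard part.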

    Advanced machine learning

    Machine learning will assuredly continue to fortify AI deployments in both the public and private sectors, for consumers and the enterprise alike. As such, its advanced applications pertaining to wide data, topological data analysis, and reinforcement learning will have even greater sway over the underlying worth of this technology to business processes and personal life. How effectively organizations adapt to these applications and incorporate them into workflows will influence the overall effectiveness of their cognitive computing investments.  

    Author: Jelani Harper

    Source: Insidebigdata

  • Access to RPA bots in Azure thanks to Automation Anywhere

    Automation Anywhere has made it possible to access its Robotic Process Automation (RPA) bots from Azure. The company states that an extensive partnership has been set up with Microsoft, which should enable joint product integration, joint sales and joint marketing.

    Automation Anywhere also chose Azure as its cloud provider, giving joint customers access to automation technology anytime and anywhere, writes idm. Organizations can host Automation Anywhere's RPA platform on Azure, on premises, and in a public or private cloud.

    Alysa Taylor, Corporate Vice President of Cloud and Business at Microsoft Business Applications and Industry, states that Automation Anywhere's vision matches Microsoft's. 'That is the vision to put data and intelligence into all our products, applications, services and experiences'. According to Mihir Shukla, CEO and co-founder of Automation Anywhere, the partnership enables companies to become more efficient, to lower costs through automation, and to let employees focus on what they do best.

    Microsoft, in turn, will showcase Automation Anywhere's automation products in its Executive Briefing Centres worldwide. There, customers can get hands-on demonstrations of Microsoft products powered by Automation Anywhere technology.

    In April, Automation Anywhere announced it had also set up a partnership with Oracle Integration Cloud. The two companies want to accelerate intelligent automation and enable the adoption of AI-driven software bots in the Integration Cloud.

    With Automation Anywhere's RPA platform, Oracle Integration Cloud customers should be able to automate complex business processes, so that employees can focus on higher-value work. It should also increase organizational efficiency.

    As part of the partnership, the enterprise RPA platform connector for the Integration Cloud will become available. Oracle customers will also gain access to Automation Anywhere's software bots, and the two companies are working together on additional bot creations specifically for Oracle. Those bot creations should become available in the Automation Anywhere Bot Store.

    Author: Eveline Meijer

    Source: Techzine

  • Top 10 big data predictions for 2019

    The amount of data created nowadays is incredible. The amount and importance of data is ever growing, and with that, the need to analyze and identify patterns and trends in data becomes critical for businesses. Therefore, the need for big data analytics is higher than ever. That raises questions about the future of big data: 'In which direction will the big data industry evolve?' 'What are the dominant trends for big data in the future?' While there are several predictions doing the rounds, these are the top 10 big data predictions that will most likely dominate the (near) future of the big data industry:

    1. An increased demand for data scientists

    It is clear that with the growth of data, the demand for people capable of managing big data is also growing. Demand for data scientists, analysts and data management experts is on the rise. The gap between the demand and availability of people who are skilled in analyzing big data trends is big and keeps getting bigger. It is up to you to decide whether to hire offshore data scientists/data managers or an in-house team for your business.

    2. Businesses will prefer algorithms over software

    Businesses will increasingly prefer purchasing existing algorithms over creating their own. Buying algorithms also gives them more customization options than buying software: software cannot be modified to user requirements; instead, businesses have to adjust to the software.

    3. Businesses increase investments in big data

    IDC analysts predict that the investment in big data and analytics will reach $187 billion in 2019. Even though the big data investment from one industry to the other will vary, spending as a whole will increase. It is predicted that the manufacturing industry will experience the highest investment in big data, followed by healthcare and the financial industry.

    4. Data security and privacy will be a growing concern

    Data security and privacy have been the biggest challenges in the big data and internet of things (IoT) industries. Since the volume of data started increasing exponentially, the privacy and security of data have become more complex, and the need to maintain high security standards is becoming extremely important. If anything will impede the growth of big data, it is data security and privacy concerns.

    5. Machine learning will be of more importance for big data

    Machine learning will be of paramount importance regarding big data. One of the most important reasons why machine learning will be important for big data is that it can be of huge help in predictive analysis and addressing future challenges.

    6. The rise of predictive analytics

    Simply put, predictive analytics can predict the future more reliably with the help of big data analytics. It is a highly sophisticated and effective way to gather market and customer information to determine the next actions of both consumers and businesses. Analytics provide depth in the understanding of future behaviour.

    7. Chief Data Officers will have a more important role

    As big data becomes important, the role of Chief Data Officers will increase. Chief Data Officers will be able to direct functional departments with the power of deeply analysed data and in-depth studies of trends.

    8. Artificial Intelligence will become more accessible

    Without going into detail about how Artificial Intelligence is becoming significantly important for every industry, it is safe to say that big data is a major enabler of AI. Processing large amounts of data to derive trends for AI and machine learning is now feasible: with cloud-based data storage infrastructure, parallel processing of big data is possible. Big data will make AI more productive and more efficient.

    9. A surge in IoT networks

    Smart devices are dominating our lives like never before. There will be an increase in the use of IoT by businesses and that will only increase the amount of data that is being generated. In fact, the focus will be on introducing new devices that are capable of collecting and processing data as quickly as possible.

    10. Chatbots will get smarter

    Needless to say, chatbots account for a large part of daily online interaction. Chatbots are becoming more and more intelligent and capable of personalized interactions. With the rise of AI, big data will enable tons of conversations to be processed and analysed, allowing a more streamlined, customer-focused strategy that makes chatbots smarter.

    Is your business ready for the future of big data analytics? Keep the above predictions in mind when preparing your business for emerging technologies and think about how big data can play a role.

    Source: Datafloq

  • Top artificial intelligence trends for 2020


    Top AI trends for 2020 are increased automation to extend traditional RPA, deeper explainable AI with more natural language capacity, and better chips for AI on the edge.

    The AI trends 2020 landscape will be dominated by increasing automation, more explainable AI and natural language capabilities, better AI chips for AI on the edge, and more pairing of human workers with bots and other AI tools.

    AI trends 2020: increased automation

    In 2020, more organizations across many vertical industries will start automating their back-end processes with robotic process automation (RPA), or, if they are already using automation, increase the number of processes to automate.

    RPA is 'one of the areas where we are seeing the greatest amount of growth', said Mark Broome, chief data officer at Project Management Institute (PMI), a global nonprofit professional membership association for the project management profession.

    Citing a PMI report from summer 2019 that compiled survey data from 551 project managers, Broome said that now, some 21% of surveyed organizations have been affected by RPA. About 62% of those organizations expect RPA will have a moderate or high impact over the next few years.

    RPA is an older technology; organizations have used it for decades. It's starting to take off now, Broome said, partially because many enterprises are only now becoming aware of the technology.

    'It takes a long time for technologies to take hold, and it takes a while for people to even get trained on the technology', he said.

    Moreover, RPA is becoming more sophisticated, Broome said. Intelligent RPA or simply intelligent process automation (IPA), RPA infused with machine learning, is becoming popular, with major vendors such as Automation Anywhere and UiPath often touting their intelligent RPA products. With APIs and built-in capabilities, IPA enables users to more quickly and easily scale up their automation use cases or carry out more sophisticated tasks, such as automatically detecting objects on a screen, using technologies like optical character recognition (OCR) and natural language processing (NLP).

    Sheldon Fernandez, CEO of DarwinAI, an AI vendor focused on explainable AI, agreed that RPA platforms are becoming more sophisticated. More enterprises will start using RPA and IPA over the next few years, he said, but it will happen slowly.

    AI trends 2020: push toward explainable AI

    Even as AI and RPA become more sophisticated, there will be a bigger move toward more explainable AI.

    'You will see quite a bit of attention and technical work being done in the area of explainability across a number of verticals', Fernandez said.

    Users can expect two sets of effort behind explainable AI. First, vendors will make AI models more explainable for data scientists and technical users. Eventually, they will make models explainable to business users.

    Technology vendors will also likely move to address problems of data bias and to maintain more ethical AI practices.

    'As we head into 2020, we're seeing a debate emerge around the ethics and morality of AI that will grow into a highly contested topic in the coming year, as organizations seek new ways to remove bias in AI and establish ethical protocols in AI-driven decision-making', predicted Phani Nagarjuna, chief analytics officer at Sutherland, a process transformation vendor.

    AI trends 2020: natural language

    Furthermore, BI, analytics and AI platforms will likely get more natural language querying capabilities in 2020.

    NLP technology also will continue to evolve, predicted Sid Reddy, chief scientist and senior vice president at virtual assistant vendor Conversica.

    'Human language is complex, with hundreds of thousands of words, as well as constantly changing syntax, semantics and pragmatics and significant ambiguity that make understanding a challenge', Reddy said.

    'As part of the evolution of AI, NLP and deep learning will become very effective partners in processing and understanding language, as well as more clearly understanding its nuance and intent', he continued.

    Among the tech giants involved in AI, AWS, for example, revealed Amazon Kendra in November 2019, an AI-driven search tool that will enable enterprise users to automatically index and search their business data. In 2020, enterprises can expect similar tools to be built into applications or sold as stand-alone products.

    More enterprises will deploy chatbots and conversational agents in 2020 as well, as the technology becomes cheaper, easier to deploy and more advanced. Organizations won't fully replace contact center employees with bots, however. Instead, they will pair human employees more effectively with bot workers, using bots to answer easy questions, while routing more difficult ones to their human counterparts.

    'There will be an increased emphasis in 2020 on human-machine collaboration', Fernandez said.

    AI trends 2020: better AI chips and AI at the edge

    To power all the enhanced machine learning and deep learning applications, better hardware is required. In 2020, enterprises can expect hardware that's specific to AI workloads, according to Fernandez.

    In the last few years, a number of vendors, including Intel and Google, released AI-specific chips and tensor processing units (TPUs). That will continue in 2020, as startups begin to enter the hardware space. Founded in 2016, the startup Cerebras, for example, unveiled a giant AI chip that made the news. The chip, the largest ever made, Cerebras claimed, is the size of a dinner plate and designed to power massive AI workloads. The vendor shipped some last year, with more expected to ship this year.

    While Cerebras may have created the largest chip in the world, 2020 will likely introduce smaller pieces of hardware as well, as more companies move to do AI at the edge.

    Max Versace, CEO and co-founder of neural network vendor Neurala, which specializes in AI technology for manufacturers, predicted that in 2020, many manufacturers will move toward the edge, and away from the cloud.

    'With AI and data becoming centralized, manufacturers are forced to pay massive fees to top cloud providers to access data that is keeping systems up and running', he said. 'As a result, new routes to training AI that can be deployed and refined at the edge will become more prevalent'.

    Author: Mark Labbe

    Source: TechTarget

  • United Nations CITO: Artificial intelligence will be humanity's final innovation

    The United Nations Chief Information Technology Officer spoke with TechRepublic about the future of cybersecurity, social media, and how to fix the internet and build global technology for social good.

    Artificial intelligence, said United Nations chief information technology officer Atefeh Riazi, might be the last innovation humans create.

    "The next innovations," said the cabinet-level diplomat during a recent interview at her office at UN headquarters in New York, "will come through artificial intelligence."

    From then on, said Riazi, "it will be the AI innovating. We need to think about our role as technologists and we need to think about the ramifications—positive and negative—and we need to transform ourselves as innovators."

    Appointed by Secretary General Ban Ki-moon as CITO and Assistant Secretary-General of the Office of Information and Communications Technology in 2013, Riazi is also an innovator in her own right in the global security community.

    Riazi was born in Iran, and is a veteran of the information technology industry. She has a degree in electrical engineering from Stony Brook University in New York, spent over 20 years working in IT roles in the public and private sectors, and was the New York City Housing Authority's Chief Information Officer from 2009 to 2013. She has also served as the executive director of CIOs Without Borders, a non-profit organization dedicated to using technology for the good of society—especially to support healthcare projects in the developing world.

    Riazi and her UN staff meet with diplomats and world leaders, NGOs, and executives at private companies like Google and Facebook to craft technology policy that impacts governments and businesses around the world.

    TechRepublic's in-depth interview with her covered a broad range of important technology policy issues, including the digital divide, e-waste, cybersecurity, social media, and, of course, artificial intelligence.

    The Digital Divide

    TechRepublic: Access to information is essential in modern life. Can you explain how running IT for the New York City Housing Authority helps low income people?

    UN CITO: When I was at New York City Housing, I came in as a CIO. The chairman had been a CIO and within six months most of the leadership left. He looked at me. I looked at him. The board looked at me. I knew to be nervous, and they said, "you're in. You're the next acting general manager of New York City Housing." I said, "Okay."

    New York City Housing is a $3 billion organization providing support to about 500,000 residents. You have the Section 8 program, you have the public housing, and a billion and a half of construction. I came out of IT and I had to help manage and run New York City Housing at a very difficult time.

    When you look at the city of New York, the digital divide among the youth and among the poor is very high. We have a digital divide right in this great city. Today I have two eight year olds and their homework. A lot of [their] research is done online. But in other areas of the city, you have kids that don't have access to computers, don't have access to the internet, cannot afford it. They can't find jobs because they don't have access to the internet. They can't do as well in school. A lot of them are single family, maybe grandparents raising them.

    How do we provide them that access? How do we close the gap so they can compete with other classmates who have access to knowledge and information?

    In Finland, they passed a law stating that internet access is a birthright. If it's a birthright, then let's give it to people right here in New York and elsewhere in the world.

    All of the simple things that we have and we offer our children, if we could [provide internet access] as a public service, we begin to close the income gap, help people learn skills, and make them more viable for jobs.


    TechRepublic: Can you help us understand the role of electronic waste (e-waste) on women and girls in developing countries?

    UN CITO: E-waste is the mercury and lead. E-waste contributes about 5% of global waste, yet 70% of its hazardous materials. You have computers, servers, storage, and cell phones, and we have no plans for recycling these. This is polluting the air and the water in China and India. If you burn electronics you get dioxin, which is like Agent Orange. The question to the tech sector is: okay, you created this wonderful world of technology, but you have no plan for addressing these big issues of environmental hazard.

    The impact of electronic waste is tremendous because a woman's body treats mercury like calcium. It takes it in, stores it in the bones, and then when you're pregnant, guess what? The body thinks, "Oh, I've got some calcium. Here it is."

    Newborns have mercury and lead in their blood, and disease. It's contributing to so many children and so many women getting sick, and because women pass it on to the next generation, [children] are impacted.

    Where is the responsibility of the tech sector to say, "I will protect the women. I will protect the children. I will take out the lead and mercury. I will help contribute to the recycling of my materials"?

    The Deep Web

    TechRepublic: While there are many privacy benefits to the Deep Web, it's no secret that criminal activity flourishes on underground sites. I know this is the perpetual question, but is this criminal behavior that has always existed and now we can see it a little better, or does the Deep Web perpetuate and increase criminal behavior?

    UN CITO: I wish I had enough insight to answer correctly, but I can give it from my perspective. The scope has changed tremendously. If you look at slavery and the number of people trafficked, there are 200 million people trafficked now. You look at the numbers and you look at how much slaves were sold for [in the past]. I think the slaves were sold for [hundreds] of... today's dollars. Today, you can buy a girl for $300 through the Deep Web.

    Here's the thing. Child trafficking and human trafficking have exploded because we're a global world. We can sell and buy globally. Before, the criminals couldn't do it globally. They couldn't move people as fast.

    TechRepublic: If we're putting this in very cynical market terms, the market for humans has grown due to the Deep Web?

    UN CITO: Yes. The market has grown for sex trafficking, or for organs, or for just basic labor. There are many reasons why this has happened. We're seeing tremendous growth in criminal activity. It's very difficult to find criminals. Drug trafficking is easier. Commerce is easier in the Deep Web. All of that is going up.

    Of humans, 99% are good, but you've got the 1%, and I think we have to plan how to react to the criminal activities. At the UN we are beginning to build the cyber-expertise to become a catalyst. Not to resolve these issues, because I look at the internet as an infant that we have created, this species we've created which is growing and evolving. It's going through its "terrible twos" right now. We have a choice to try to manage it, censor it, or shut it down, which we see in some countries. Or we have a choice to build its antibodies. Make sure that it becomes strong.

    We [can] create the "Light Web," and I think we can only do it through the use of all the amazing technology people globally want to [use to] do good. As a social group, we can create positive algorithms for social good.

    Encryption and cybersecurity

    TechRepublic: In the digital world, the notion of sovereignty is shifting. What is the UN's role in terms of cybersecurity?

    UN CITO: It's shifting, exactly, because government rule over a civil society in a cyber-world doesn't exist. Do you think that criminals care that the UN or governments have a policy, or a rule? Countries and criminals will begin to attack each other.

    From our perspective, our mission is really peace and security, development, and human rights. The UN has a number of responsibilities. We have peacekeeping, human rights, development, and sustainable development. We look at cybersecurity, and we say that peace in the cyber-world is very different because countries are starting to attack each other, and starting to attack each other's industrial systems. Often attacks are asymmetrical. Peace to me is very different than peace to you.

    We talk about cybersecurity. Okay, then what do we do? This is the world we've created through the internet. What do we do to bring peace to this world? What does anyone do?

    I think that we spend a lot of money on cybersecurity globally. Public and private money, and we are not successful, really. Intrusions happen every day. Intellectual property is lost. Privacy, the way we knew it, has changed completely. There's a new way of thinking about privacy, and what's confidential.

    We worry about industrial systems like our electric grid. We worry about our member states' industrial systems, intrusions into electricity, into water, and sanitation—things that impact human life.

    Our peacekeepers are out in the field. We have helicopters. We have planes. A big worry of ours is an intrusion into a plane or helicopter, where you think the fuel gauge is full but it's empty. Or through a GPS. If your GPS is impacted, and you think you're here but you're actually there.

    Where is the role of encryption? Encryption is amoral. It could be used for good. It could be used for bad. It's hard to have an opinion on encryption, for me at least, without realizing that the same thing I endorse for everyone, others endorse for criminals. Do we have the sophistication, the capabilities to limit that technology only for the good? I don't think we do.

    TechRepublic: What is the plan for cybersecurity?

    UN CITO: Well, I've been waiting. I think that is something for all the member states to come together and talk about cybersecurity.

    But what is the plan for us as Homo sapiens, now that we are connected sapiens and very soon a combination of carbon and silicon? As super-intelligent beings, what is the plan? This is not being talked about. We hope that through the creation of the digital Blue Helmet we'd begin a conversation and begin to ask people to contribute positively to what we believe is ethically right. But then again, what we believe is ethically right somebody else may believe is ethically wrong.

    Social Media

    TechRepublic: The UN recently held a conference on social media and terrorism, particularly related to Daesh [ISIS]. What was the discussion about? What takeaways came from that conference?

    UN CITO: Well, we brought together a lot of information and communication professionals and academics to talk about the big issue of social media and terrorism with Daesh and ISIL. I think this type of dialog is really critical, because if we don't talk about these issues, we can't come up with policy recommendations. I think there was a lot of really good discussion about human rights on the internet. "Thou shalt do no harm."

    But we know that whatever policies we come up with, Daesh would be the last group that cares whether you have policies or not. There was deeper discussion about how youth get attracted to radicalism. You have 50% youth unemployment. You have major income disparity. I think if we can't begin to address the basic social issues, we're going to have more and more youth attracted to this radicalism. There was good discussion and dialog that we need to address those issues.

    There was some discussion about how we create the positive message. People, especially youth, want to do something positive. They want to participate. They want to be part of a bigger thing. How do we encourage them? When they look at the negative message, how do you bring in a positive message? Can governments do something about that?

    Look at the private sector. When there was a Tylenol scare, or Toyotas speeding on their own, and you went online and searched for Tylenol, you didn't get all the bad stories about Tylenol. You went to the sites that Tylenol wanted you to go to. Search is so powerful, and if you can begin to write positive algorithms, that begins to move the youth to positive messaging.

    Don't try to use marketing or gimmicks because it's so transparent. People see right through it. Governments have a responsibility to provide a positive information space for their youth. There was a lot of good dialog around that.

    On the technology side, I think the internet is a two-year-old infant. The internet is amoral, and we can use it for good or use it for bad. You can't shut down the internet. You can't shut down social media. There's a very gray space because, as I said, somebody's freedom fighter is somebody else's terrorist. Is it for Facebook or Twitter to make that decision?

    Artificial intelligence

    TechRepublic: I know you are quite curious about artificial intelligence. Is there a UN policy with respect to AI?

    UN CITO: AI is an amazing thing to talk about, because now you can look at patterns much faster than humans [can]. Do we as technologists have the sophistication of addressing the moral and ethical issues of what's good and bad?

    I think this is what scares me when it comes to AI. Let's say we as humans say, "we want people to be happy and with artificial intelligence, we should build systems for people to be happy." What does that mean?

    I'm looking at machine learning, and the path we're creating for 10, 20, 30 years from now, but not fully understanding the ethical programming that we're putting into the systems. IT people are creating the next world. The ethical programming they do is what is in their heads, and so policies are being written in lines of code, in the algorithms.

    We look at artificial intelligence and machine learning, and the world we see as technologists 20 years from now is very different than the world we have today. Artificial intelligence is this super, super intelligent species that is not human. Humans have reached our limitation.

    That idea poses so many questions. If we create this artificial intelligence that can do 80% of the labor that humans do, what are the changes? Social, cultural, economic. All of these big, big questions have to be talked about.

    I'm hoping that's the United Nations, but there's so much political opposition to those conversations. So much political opposition because we are holding on to our physical borders, and we have forgotten that those physical borders are gone. The world is virtual. We sit here as heads of departments and ministers and talk about AI. We discuss the moral, the ethical issues that people are going to confront with AI technology—positive and negative.

    Source: TechRepublic

  • Using the right workforce options to develop AI with the help of data

    Using the right workforce options to develop AI with the help of data

    While it may seem like artificial intelligence (AI) has hit the jackpot, a lot of work needs to be done before its potential can really come to life. In our modern take on the 20th century space race, AI developers are hard at work on the next big breakthrough that will solve a problem and establish their expertise in the market. It takes a lot of hard work for innovators to deliver on their vision for AI, and it’s the data that serves as the lifeblood for advancement.  

    One of the biggest challenges AI developers face today is to process all the data that feeds into machine learning systems, a process that requires a reliable workforce with relevant domain expertise and high standards for quality. To address these obstacles and get ahead, many innovators are taking a page from the enterprise playbook, where alternative workforce models can provide a competitive edge in a crowded market.

    Alternative workforce options

    Deloitte's 2018 Global Human Capital Trends study found that only 42% of organizations surveyed said their workforce is made up of traditional salaried employees. Employers expect their dependence on contract, freelance and gig workers to dramatically increase over the next few years. Accelerating this trend is the pressure business leaders face to improve their workforce ecosystem, as alternative workforce options bring the possibility for companies to advance services, move faster and leverage new skills.

    While AI developers might be tempted to tap into new workforce solutions, identifying the right approach for their unique needs demands careful consideration. Here’s an overview of common workforce options and considerations for companies to select the right strategy for cleaning and structuring the messy, raw data that holds the potential to add rocket fuel to your AI efforts:

    • In-house employees: The first line of defense for most companies, internal teams can typically manage data needs with reasonably good quality. However, these processes often grow more difficult and costlier to manage as things progress, calling for a change of plans when it’s time to scale. That’s when companies are likely to turn to alternative workforce options to help structure data for AI development.
    • Contractors and freelancers: This is a common alternative to in-house teams, but business leaders will want to factor in extra time it will take to source and manage their freelance team. One-third of Deloitte’s survey respondents said their human resources (HR) departments are not involved in sourcing (39%) or hiring (35%) decisions for contract employees, which 'suggests that these workers are not subject to the cultural, skills, and other forms of assessments used for full-time employees'. That can be a problem when it comes to ensuring quality work, so companies should allocate additional time for sourcing, training and management.
    • Crowdsourcing: Crowdsourcing leverages the cloud to send data tasks to a large number of people at once. Quality is established using consensus, which means several people complete the same task. The answer provided by the majority of the workers is chosen as correct. Crowd workers are paid based on the number of tasks they complete on the platform provided by the workforce vendor, so it can take more time to process data outputs than it would with an in-house team. This can make crowdsourcing a less viable option for companies that are looking to scale quickly, particularly if their work requires a high level of quality, as with data that provides the intelligence for a self-driving car, for example.
    • Managed cloud workers: A solution that has emerged over the last decade, combining the quality of a trained, in-house team with the scalability of the crowd. It's ideally suited for data work because dedicated teams develop expertise in a company's business rules by sticking with projects for a longer period of time. That means they can increase their context and domain knowledge while providing consistently high data quality. However, teams need to be managed in ways that optimize productivity and engagement, and that takes deliberate effort. Companies should look for partners with tested procedures for communication and process.
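    The consensus check described in the crowdsourcing option above can be sketched in a few lines. This is a minimal illustration of majority voting with a hypothetical function name; real platforms typically weight workers by their accuracy history and use more elaborate agreement measures:

```python
from collections import Counter

def consensus_label(answers):
    """Return the majority answer from several crowd workers.

    A task is accepted only when more than half of the workers agree;
    otherwise it is flagged for review by returning None.
    """
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    if votes > len(answers) / 2:
        return label
    return None

# Three workers label the same image; the majority answer wins.
print(consensus_label(["cat", "cat", "dog"]))   # cat
print(consensus_label(["cat", "dog", "bird"]))  # None (no majority)
```

    Because several people complete each task, throughput per worker is lower than with a dedicated team, which is the trade-off the text describes.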

    Getting down to business

    From founders and data scientists to product owners and engineers, AI developers are fighting an uphill battle. They need all the support they can get, and that includes a dedicated team to process the data that serves as the lifeblood of AI and machine learning systems. When you combine the training and management challenges that AI developers face, workforce choices might just be the factor that determines success. With the right workforce strategy, companies will have the flexibility to respond to changes in market conditions, product development and business requirements.

    As with the space race, the pursuit of AI in the real world holds untold promise, but victory won't come easy. Progress is hard-won, and innovators who identify strong workforce partners will have the tools and talent they need to test their models, fail faster and ultimately get it right sooner. Companies that make this process a priority now can ensure they're in the best position to break away from the competition as the AI race continues.

    Author: Mark Sears

    Source: Dataconomy

  • What to expect from BI in 2020, according to Toucan Toco

    What to expect from BI in 2020, according to Toucan Toco

    In 2020, nearly 1.7 MB of data will be generated every second. The potential of this data is endless. Toucan Toco, a specialist in data storytelling, sees five ways in which Business Intelligence (collecting, analyzing, and presenting data in order to make better decisions) will change in 2020.

    AI in BI

    Artificial intelligence (AI) affects every area within organizations, and business intelligence is no exception. The great potential of this technology promises to augment human intelligence by revolutionizing the way we deal with business data and analytics. Have we already seen the best of AI in BI? Toucan Toco says certainly not. It is clear that AI can process enormous amounts of data faster than humans can. Moreover, the technology offers a new perspective on business intelligence and makes it easier to obtain insights that previously went unnoticed. With the rise of explainable AI (often abbreviated as XAI), which explains how artificial intelligence arrives at a particular outcome, it will not be long before AI decisions can be justified in an understandable way. More critical developments are expected in the coming years: AI in BI is here to stay, and its impact will be felt well beyond 2020.

    Focus on data quality

    Data is the lifeblood of every company. There is, however, one essential caveat: if data is not accurate, up to date, consistent, and complete, it can not only lead to wrong decisions but even hurt profitability. IBM calculated that in the US alone, companies lose 3.1 trillion dollars every year due to poor data quality. Poor data quality is a problem that companies of all sizes have long suffered from, and it only gets worse as data sources become increasingly intertwined.
    The rise of Data Quality Management is bringing change. Data quality management is an integral process that combines technology, process, the right people, and organizational culture to deliver data that is not only accurate but also useful. Data Quality Management was one of the most popular focus areas in business intelligence in 2019. Every company wants to implement processes for optimizing data quality in order to apply business intelligence more effectively. In 2020 this focus will grow even stronger.

    Actionable analytics everywhere

    Traditionally, there has been a literal distance between where business intelligence data is collected and where BI insights emerge. However, to keep a grip on business workflows and processes, companies can no longer analyze data in one silo and take action in another. Fortunately, modern BI tools have evolved to make business data available exactly where users want to take action. These tools are integrated with critical business processes and workflows through, for example, dashboard extensions and APIs. As a result, it is now easy to implement actionable analytics to speed up the decision-making process. Business users can view data, derive actionable insights from it, and implement them, all in one place. Most BI tools also offer mobile analytics to deliver insights anytime, anywhere. Although actionable analytics is one of the emerging trends in business intelligence, it is already widely popular, and that popularity will only grow next year.

    Data storytelling becomes the norm

    Analyzing data is one thing; interpreting data and learning from it is another. It is precisely this interpretation and these insights that guide decision-making in the Business Intelligence process. Companies have realized that dashboard figures alone are meaningless if they are not accurately placed in context and cannot be interpreted. In a data-driven world, data storytelling is therefore becoming increasingly important. Storytelling adds context to statistics and provides the narrative needed to turn insights into action. In 2020, data storytelling will deepen the way companies use data to discover new insights.

    Data discovery enriched with data visualization

    Data discovery is a process in which data from multiple silos and databases is collected and merged into a single source to simplify analysis. This also helps companies achieve better alignment and collaboration between the people who prepare the data for analysis and the people who carry out the analysis and extract insights from it. Data discovery systems are making it ever easier for every employee to access data and pull out the information they need. Data visualization is also evolving, and has been extended with heat maps and geographic functionality, among other things. Because data discovery and data visualization offer end users more and more, Toucan Toco expects that in 2020 organizations will make even better use of the data at their disposal and thereby uncover unexpected insights.

    Source: BI-platform

  • What about the relation between AI and machine learning?

    Artificial intelligence is one of the most compelling areas of computer science research. AI technologies have gone through periods of innovation and growth, but never has AI research and development seemed as promising as it does now. This is due in part to amazing developments within machine learning, deep learning, and neural networks.

    Machine learning, a cutting-edge branch of artificial intelligence, is propelling the AI field further than ever before. While AI assistants like Siri, Cortana, and Bixby are useful, if not amusing, applications of AI, they lack the ability to learn, self-correct, and self-improve. 

    They are unable to operate outside of their code, learn independently, and apply past experiences to new problems. Machine learning is changing that. Machines are able to grow outside their original code which allows them to mimic the cognitive processes of the human mind.

    Why is machine learning important for AI? As you have most likely already gathered, machine learning is the branch of AI dedicated to endowing machines with the ability to learn. While there are programs that help sort your email, provide you with personalized recommendations based on your online shopping behavior, and make playlists based on music you like, these programs lack the ability to truly think for themselves. 

    While these "weak AI" programs are able to analyze data well and conjure up impressive responses, they are a far cry from true artificial intelligence. Arriving at anything close to true artificial intelligence would require a machine to learn. A machine with true artificial intelligence, also known as artificial general intelligence, would be aware of its environment and would manipulate that environment to achieve its goals. A machine with artificial general intelligence would be no different from a human, who is aware of his or her surroundings and uses that awareness to arrive at solutions to problems occurring within those surroundings.

    You may be familiar with the famous AlphaGo program that beat a professional Go player in 2016, to the chagrin of many professional Go players. While AI has been able to beat chess players in the past, the AI win came as an incredible shock to Go players and AI researchers alike. Surpassing Go players was previously thought to be impossible, given that each move in the ancient game has an almost infinite number of permutations. Decisions in Go are so intricate and complex that it was thought that the game required human intuition. As it so happens, Go does not require human intuition; it only requires general-purpose learning algorithms.

    How were these general-purpose learning algorithms crafted? The AlphaGo program was created by DeepMind Technologies, an AI company acquired by Google in 2014. Using a team of researchers and C++, Lua, and Python developers, DeepMind managed to create a neural network as well as a model that allows machines to mimic short-term memory. The neural network and the short-term memory model are applications of deep learning, a cutting-edge branch of machine learning.

    Deep learning is an approach to machine learning in which software emulates the human brain. Currently, machine learning applications allow a machine to train on a certain task by analyzing examples of that task. Deep learning allows machines to learn in a more general way. So, instead of simply mimicking cognitive functioning in a predefined task, machines are endowed with what can be thought of as a sort of artificial brain. This artificial brain is called an artificial neural network, or neural net for short.

    There are several neural net models in use today, and all use mathematics to copy the structure of the human brain. Neural nets are divided into layers, and consist of thousands, sometimes millions, of interconnected processing nodes. Each connection between nodes is given a weight. If a node's weighted input is over a predefined threshold, then the node's data is sent on to the next layer. These nodes act as artificial neurons, sharing clusters of data, storing experience and knowledge based on that data, and firing off new bits of information. The nodes interact dynamically and change thresholds and weights as they learn from experience.
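    The weight-and-threshold behavior described above can be sketched in a few lines. This is a deliberately simplified illustration with a hard threshold and hand-picked weights; real networks use smooth activation functions and learn their weights from data rather than having them set by hand:

```python
def neuron(inputs, weights, threshold):
    """A single artificial neuron: weight each input, sum, and 'fire'
    (output 1) only if the weighted sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def layer(inputs, weight_rows, threshold):
    """One layer: every neuron sees all inputs, each via its own weights.
    The list of 0/1 outputs becomes the input to the next layer."""
    return [neuron(inputs, w, threshold) for w in weight_rows]

# Two inputs feed a layer of three neurons; firing neurons pass 1 onward.
outputs = layer([0.5, 0.8], [[1.0, 1.0], [0.2, 0.1], [-1.0, 2.0]], 0.5)
print(outputs)  # [1, 0, 1]
```

    Learning, in this simplified picture, would mean adjusting the weight rows and threshold after each example, which is what training algorithms such as backpropagation automate.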

    Machine learning and deep learning are exciting and alarming areas of research within AI. Endowing machines with the ability to learn certain tasks could be extremely useful, could increase productivity, and could help expedite all sorts of activities, from search algorithms to data mining. Deep learning provides even more opportunities for AI's growth. As researchers delve deeper into deep learning, we could see machines that understand the mechanics behind learning itself, rather than simply mimicking intellectual tasks.

    Author: Greg Robinson

    Source: Information Management

  • What is edge intelligence and how to apply it?

    What is edge intelligence and how to apply it?

    The term “edge intelligence,” also referred to as “intelligence on the edge,” describes a new phase in edge computing. Organizations are using edge intelligence to develop smarter factory floors, retail experiences, workspaces, buildings, and cities. The edge has become “intelligent” by way of analytics that were formerly limited to the cloud or in-house data centers. In this process, an intelligent remote sensor node may make a decision on the spot or send the data to a gateway for further screening before sending it to the cloud or another storage system.

    Mining big data for useful insights can be a major challenge. Searching through data is very much like panning for gold, a time consuming task with occasional rewards. Organizations are aware of the strategic importance of big data and analytics, but there are still hurdles to overcome.

    While data can give a business a competitive edge, there is also the potential to swamp their storage systems with worthless information. There is simply an overwhelming amount of data being created on a daily basis, much of which is useless. Asha Keddy, Corporate Vice President and Manager of Next Generation and Standards at Intel, stated, “We’re generating too much data.”

    Prior to edge computing, streams of data were sent straight from the internet of things (IoT) to a central data storage system. Early edge computing was an effort to provide a data screening process using micro-data stations (preferably within 100 square feet of the sensor nodes) to eliminate unnecessary or redundant data before sending it on. In simpler terms, early edge computing attempted to send leaner, more efficient data streams, with less data to store and process on the primary system.
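    The screening that an early micro-data station performed can be illustrated with a simple dead-band filter: forward a reading only when it differs meaningfully from the last value sent. The function name and threshold below are illustrative assumptions, not the logic of any specific product:

```python
def screen_readings(readings, min_change=0.5):
    """Drop redundant sensor readings before transmission: forward a
    reading only when it differs from the last forwarded value by at
    least `min_change` (a simple dead-band filter)."""
    forwarded = []
    last = None
    for value in readings:
        if last is None or abs(value - last) >= min_change:
            forwarded.append(value)
            last = value
    return forwarded

# Ten raw temperature samples shrink to the few that actually changed.
raw = [20.0, 20.1, 20.0, 20.2, 21.5, 21.6, 21.5, 23.0, 23.1, 23.0]
print(screen_readings(raw))  # [20.0, 21.5, 23.0]
```

    The result is the leaner, more efficient data stream described above, with far less data to store and process on the primary system.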

    Cities, buildings, and industrial systems start with an edge sensor node, which senses and measures a specific range of information that is then used in making key decisions. The edge nodes can process data intelligently, and can bundle, refine, or encrypt the data for transmission to a data storage system. Ideally, an edge node is small, unobtrusive, and can fit in environments with minimal amounts of space.

    The intelligence aspect

    There are a wide variety of sensing devices available for use at the edge that provide all kinds of data on such things as vibrations, sound, temperature, humidity, motion, pressure, pollutants, audio, and video. The screened data is then transmitted through a gateway to the cloud for storage and further analysis. These gateways are essentially small servers, and exist between an organization’s cloud or data center, or its cloud and the sensors being used.

    Edge gateways have developed into architectural components that improve the performance of IoT networks. These gateways are available as off-the-shelf devices that are adaptable enough to mix and match with differing clouds and sensors. Different gateways are used for different tasks. Gateways needing to perform a real-time analysis of data from a factory floor will need to be more powerful than a gateway that simply tracks the location data of an automated fulfillment center.

    Connected sensors provide a broad range of information that should be used in making key decisions. The edge node is the data source, and if recorded information is faulty and of poor quality, use of the data can do more damage than good.

    Machine learning

    Machine learning (ML) is an important aspect of edge intelligence, and chips designed for running ML models are commercially available. ML can detect patterns and anomalies in the data stream and initiate the appropriate response.
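    As a rough stand-in for the pattern-and-anomaly detection an ML model performs on an edge data stream, a rolling z-score check flags values that deviate sharply from the recent window. This sketch is illustrative only; a deployed system would use a trained model and tuned thresholds rather than this fixed statistical rule:

```python
from statistics import mean, stdev

def detect_anomalies(stream, window=5, z_threshold=3.0):
    """Flag indices whose values deviate sharply from the preceding
    window of readings (a rolling z-score check)."""
    anomalies = []
    for i in range(window, len(stream)):
        recent = stream[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(stream[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# A vibration trace with one spike at index 7.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0, 1.0, 1.1]
print(detect_anomalies(trace))  # [7]
```

    A flagged index is where the "appropriate response" would be initiated on the spot, without waiting for a round trip to the cloud.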

    Machine learning provides support for factories, smart cities, smart grids, augmented and virtual reality, connected vehicles, and healthcare systems. ML models are trained in the cloud and then used to make the edge intelligent.

    Machine learning is an effective way of creating a functional AI. Many ML techniques, such as  decision trees, Bayesian networks, and K-means clustering have been developed to train the AI entity to make both classifications and predictions. Deep learning (a subdivision of the ML field) is one of the techniques, and uses an artificial neural network. Deep learning has resulted in impressive abilities to perform multiple tasks, classify images, and recognize faces.

    Artificial intelligence

    While machine learning is becoming quite popular with sensor nodes in the manufacturing industry, artificial intelligence (AI) is being applied to the big data gathered from such things as social media content, business informatics, and online shopping records.

    This data was generally sent to and stored in massive data centers. However, with the expansion of mobile computing and the internet of things, that trend is starting to reverse itself. Cisco has estimated that by 2021, nearly 850 ZB of data will be produced by all the people, machines, and things on the network edge.

    Transporting bulk data from the IoT devices (smart phones and iPads) to the cloud for analytics can be expensive and inefficient. A recent solution uses on-device analytics that run AI applications to process IoT data locally. This situation, however, is not ideal. These AI applications require significant computational power (the kind not available on a smart phone), and often suffer from low performance and energy efficiency issues.

    One proposal suggests dealing with these challenges by pushing cloud services from the network's core out to its edges. An edge node sensor can be a smart phone or other mobile device. The sensor communicates with a network gateway or a micro-data center. Physical proximity to data-source devices is the most important characteristic in this situation. (Say you have a smart phone: its GPS sends a signal to a nearby 5G sensor on a telephone pole, which passes it to a gateway that determines your location and then sends the refined, finalized data to the cloud for storage or further analysis.)

    Since 2009, Microsoft has been conducting continuous research on which applications should be shifted from the cloud to the edge. Its research ranges from voice command recognition to interactive cloud gaming to real-time video analytics.

    Real-time video analytics is predicted to become a very popular application for edge computing. As an application built atop computer vision, real-time video analytics will continuously gather high-definition videos taken from surveillance cameras. These applications require high computation, high bandwidth, and low latency to analyze the videos. This is made possible by extending the cloud’s AI to gateways covering the edge.

    The smart factory

    One type of sensor that is fast gaining popularity measures the vibrations of equipment with mechanical components (rotating shafts or gears). These multi-axis sensors measure the vibrational displacement of the equipment in real time. The measured displacement can then be processed and compared with the acceptable range. In a factory, analyzing this information can increase efficiency, reduce downtime, and predict mechanical failures before they happen. In some cases, a piece of equipment with a disintegrating mechanical component can be shut down immediately, before it causes further damage.

    The time needed for sensor nodes to react can be dramatically reduced by including edge node analytics. A MEMS sensor, for example, will provide a warning when threshold limits are exceeded, and will immediately send out an alert. If data suggests the event is bad enough, the sensor may disable the equipment automatically, preventing a catastrophic breakdown.
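    The threshold logic described above can be sketched as follows; the limits, readings, and function name are invented for illustration, not taken from any real MEMS product.

```python
# Sketch of edge-node vibration monitoring: compare each reading against
# threshold limits, raise an alert, and disable the equipment if the
# displacement is severe. All limits and readings are hypothetical.

WARN_LIMIT = 5.0      # mm/s: exceeding this triggers an alert
SHUTDOWN_LIMIT = 9.0  # mm/s: exceeding this disables the equipment

def evaluate_reading(displacement_mm_s):
    """Return the action an edge node would take for one vibration reading."""
    if displacement_mm_s >= SHUTDOWN_LIMIT:
        return "shutdown"   # prevent a catastrophic breakdown
    if displacement_mm_s >= WARN_LIMIT:
        return "alert"      # warn the gateway/cloud immediately
    return "ok"

actions = [evaluate_reading(v) for v in (2.1, 6.3, 11.7)]
```

    Because this decision runs on the node itself, no round trip to the cloud is needed before the equipment is protected.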

    Smart city

    In smart cities, some industrial IoT edge node sensors can be used, such as an industrial camera with embedded video analytics. The mission statements of smart cities typically include the desire to integrate and communicate useful information to their citizens and employees. A common application provides parking space availability: cameras can identify a wide variety of objects (such as parked cars) and detect motion, and the same footage can be used to analyze movement historically as well.

    Other sensors are designed specifically for smart cities, such as pollution sensors that warn city officials when a business has exceeded its allowable standards. A sensor for sound levels can be installed in some areas, or a sensor might be used to monitor vehicles and pedestrian traffic to optimize walking and driving routes. Citizens can have their energy and water consumption monitored to get advice on reducing their usage. The increasing use of automated decision-making in our devices, apps, and business processes makes AI essential to staying competitive.

    The future of edge computing

    The intelligent edge continues to gain in popularity, connecting devices and systems to gather and analyze data. The number of IoT devices being used worldwide has exploded, and cloud computing is becoming overwhelmed with the volume of data being produced. The intelligent edge not only provides real-time insights on operational efficiency, such as improving maintenance for vital equipment before it breaks down, but also screens out useless data.

    A seamless, synchronized user experience is a basic goal of many internet organizations. For technology vendors, the intelligent edge and its connected devices provide opportunities for developing smarter, more integrated systems. These connected devices reduce the cloud's burden by screening out useless data, and businesses ignoring the concept of edge computing will inevitably lose any competitive advantage they might have had in manufacturing or customer service.

    Author: Keith D. Foote

    Source: Dataversity

  • What is the impact of AI on cybersecurity?

    What is the impact of AI on cybersecurity?

    In today's technology-driven world, we are becoming increasingly dependent on technological tools that help us finish everyday tasks much faster, or even do them for us, artificial intelligence being the most advanced of them. While some welcome it with open arms, others are more wary, urging increased protection.

    We cannot deny how much AI has infiltrated our lives. We are surrounded by it every day, though many don't even realize it. One of its simplest forms is the virtual assistant (VA), used by 72% of consumers in the USA. AI is advancing at great speed, sparking serious ethical debates.

    Not long ago, some of the world's most brilliant minds, like Stephen Hawking and Elon Musk, warned about the possible ramifications if the development of artificial intelligence is not controlled. Hawking even stated that AI could be the worst event in the history of our civilization. But whether we like it or not, the dominance of autonomous technology is inevitable.

    Security in the first place

    When it comes to cybersecurity, companies are spending huge amounts of money on maximizing its efficiency in the face of continually growing rates of cybercrime (up by 11% since last year). That is not surprising, since the average cost of cybercrime has risen to $13 million, with an average of 145 security breaches in 2019, and counting.

    Companies should worry not only about losing money and their own sensitive data, but about losing their customers as well. An IBM poll showed that 78% of respondents think a company's ability to safeguard their private data is 'extremely' important, while 75% would not buy any of its products, no matter how great they are, if they don't believe it can protect their data.

    Due to a huge shortage of qualified cybersecurity professionals, with almost 3 million open positions, companies are increasingly turning to AI for their cybersecurity protection systems. The AI cybersecurity market is expected to reach a staggering $35 billion by 2024, as businesses recognize the need for advanced technology that can keep pace with fast-evolving cybercrime.

    But how safe is AI?

    AI can contribute to an increased level of cyber protection by assisting cybersecurity experts, reducing their workload, and, with its learning algorithms, adapting to and detecting new threats much faster (today it takes more than half a year on average to detect a data breach). But there is also the other side of the coin to consider.

    Just as cybercriminals can manipulate people to obtain sensitive information, they can do the same with artificial intelligence, taking spear-phishing to a whole new level. This represents a serious concern, with a vast majority (91%) of US and Japanese professionals expecting that companies' AI will be used against them. The same applies to VAs, which record and store everything we say (personal information, business-related information, passwords, financial information…), all of which can be obtained by hackers.

    AI can make detecting new vulnerabilities much easier, but an AI system's ability to make independent decisions can also be compromised, and such tampering can stay undetected for a while. This creates huge potential for cybercriminals to launch massive attacks in disguise, especially if they use their own AIs to make those attacks more sophisticated or to build new types of malware. Another concern is that with an AI cybersecurity protection system in place, employees might fall into a false sense of security and become less cautious.


    With AI inevitably becoming an integral part of business protection systems worldwide, it is important to consider all of its aspects when introducing it, both good and bad. 

    With companies investing huge resources in perfecting these systems, cybersecurity experts should simultaneously focus on minimizing any possibility of AI being exploited by cybercriminals.

    Source: Datafloq

  • Where Artificial Intelligence Is Now and What’s Just Around the Corner

    Unexpected convergent consequences... this is what happens when eight different exponential technologies all explode onto the scene at once.

    This post (the second of seven) is a look at artificial intelligence. Future posts will look at other tech areas.

    An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try to predict the future when AI, robotics, VR, synthetic biology, and computation are all doubling, morphing, and recombining: you have a very exciting (read: unpredictable) future. This year at my Abundance 360 Summit I decided to explore this concept in sessions I called "Convergence Catalyzers."

    For each technology, I brought in an industry expert to identify their Top 5 Recent Breakthroughs (2012-2015) and their Top 5 Anticipated Breakthroughs (2016-2018). Then, we explored the patterns that emerged.

    Artificial Intelligence — Context

    At A360 this year, my expert on AI was Stephen Gold, the CMO and VP of Business Development and Partner Programs at IBM Watson. Here's some context before we dive in.

    Artificial intelligence is the ability of a computer to understand what you're asking and then infer the best possible answer from all the available evidence.

    You may think of AI as Siri or Google Now on your iPhone, Jarvis from Iron Man or IBM's Watson.

    Progress of late is furious — an AI R&D arms race is underway among the world's top technology giants.

    Soon AI will become the most important human collaboration tool ever created, amplifying our abilities and providing a simple user interface to all exponential technologies. Ultimately, it's helping us speed toward a world of abundance.

    The implications of true AI are staggering, and I asked Stephen to share his top five breakthroughs from recent years to illustrate some of them.

    Recent Top 5 Breakthroughs in AI: 2011 - 2015

    "It's amazing," said Gold. "For 50 years, we've ideated about this idea of artificial intelligence. But it's only been in the last few years that we've seen a fundamental transformation in this technology."

    Here are the breakthroughs Stephen identified in artificial intelligence research from 2011-2015:

    1. IBM Watson's Jeopardy! win demonstrates the integration of natural language processing, machine learning (ML), and big data.

    In 2011, IBM's AI system, dubbed "Watson," won a game of Jeopardy against the top two all-time champions.

    This was a historic moment, the "Kitty Hawk moment" for artificial intelligence.

    "It was really the first substantial, commercial demonstration of the power of this technology," explained Gold. "We wanted to prove a point that you could bring together some very unique technologies: natural language technologies, artificial intelligence, the context, the machine learning and deep learning, analytics and data and do something purposeful that ideally could be commercialized."

    2. Siri/Google Now redefine human-data interaction.

    In the past few years, systems like Siri and Google Now opened our minds to the idea that we don't have to be tethered to a laptop to have seamless interaction with information.

    In this model, AIs will move from speech recognition to natural language interaction, to natural language generation, and eventually to an ability to write as well as receive information.

    3. Deep learning demonstrates how machines learn on their own, advance and adapt.

    "Machine learning is about man assisting computers. Deep learning is about systems beginning to progress and learn on their own," says Gold. "Historically, systems have always been trained. They've been programmed. And, over time, the programming languages changed. We certainly moved beyond FORTRAN and BASIC, but we've always been limited to this idea of conventional rules and logic and structured data."

    As we move into the area of AI and cognitive computing, we're exploring the ability of computers to do more unaided/unassisted learning.

    4. Image recognition and interpretation now rivals what humans can do, allowing for image interpretation and anomaly detection.

    Image recognition has exploded over the last few years. Facebook and Google Photos, for example, each have tens of billions of images on their platforms. With these datasets, they (and many others) are developing technologies that go beyond facial recognition, providing algorithms that can tell you what is in an image: a boat, plane, car, cat, dog, and so on.

    The crazy part is that the algorithms are better than humans at recognizing images. The implications are enormous. "Imagine," says Gold, "an AI able to examine an X-ray or CAT scan or MRI to report what looks abnormal."

    5. AI Apps proliferate: universities scramble to adopt AI curriculum

    As AI begins to impact every industry and every profession, schools and universities are responding by ramping up their AI and machine learning curricula. IBM, for example, is working with over 150 partners to present both business- and technology-oriented students with cognitive computing curricula.

    So what's in store for the near future?

    Anticipated Top AI Breakthroughs: 2016 – 2018

    Here are Gold's predictions for the most exciting, disruptive developments coming in AI in the next three years. As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.

    1. Next-gen A.I. systems will beat the Turing Test

    Alan Turing created the Turing Test over half a century ago as a way to determine a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.

    Loosely, if an artificial system passed the Turing Test, it could be considered "AI."

    Gold believes, "that for all practical purposes, these systems will pass the Turing Test" in the next three-year period.

    Perhaps more importantly, if they do, this event will accelerate the conversation about the proper use of these technologies and their applications.

    2. All five human senses (yes, including taste, smell and touch) will become part of the normal computing experience.

    AIs will begin to sense and use all five senses. "The sense of touch, smell, and hearing will become prominent in the use of AI," explained Gold. "It will begin to process all that additional incremental information."

    When applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses.

    3. Solving big problems: detect and deter terrorism, manage global climate change.

    AI will help solve some of society's most daunting challenges.

    Gold continues, "We've discussed AI's impact on healthcare. We're already seeing this technology being deployed in governments to assist in the understanding and preemptive discovery of terrorist activity."

    We'll see revolutions in how we manage climate change, redesign and democratize education, make scientific discoveries, leverage energy resources, and develop solutions to difficult problems.

    4. Leverage ALL health data (genomic, phenotypic, social) to redefine the practice of medicine.

    "I think AI's effect on healthcare will be far more pervasive and far quicker than anyone anticipates," says Gold. "Even today, AI/machine learning is being used in oncology to identify optimal treatment patterns."

    But it goes far beyond this. AI is being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences.

    5. AI will be woven into the very fabric of our lives — physically and virtually.

    Ultimately, during the AI revolution taking place in the next three years, AIs will be integrated into everything around us, combining sensors and networks and making all systems "smart."

    AIs will push forward the ideas of transparency, of seamless interaction with devices and information, making everything personalized and easy to use. We'll be able to harness that sensor data and put it into an actionable form, at the moment when we need to make a decision.

    Source: SingularityHub

  • Why machine learning has a major impact on all industries

    Why machine learning has a major impact on all industries

    Machine learning is having a major impact on the global marketplace. It will have a profound effect on companies of all sizes over the next few years.

    Artificial intelligence surrounds us everywhere; we can hardly get through a day without encountering a solution involving AI. Machine learning is a field of artificial intelligence in which machines use algorithms to learn certain things by themselves.

    Machine learning has a vast number of applications. We encounter machine learning systems when we go shopping, use our bank accounts, or even take public transport.

    How much is machine learning changing things up? What is the demand for this new technology? One estimate pegged the global market for machine learning at $2.5 billion in 2017 and estimated that it would reach $12.3 billion less than a decade later. These estimates have been raised even higher by a newer study by Deloitte. This is proof that it is in high demand and is making a huge splash on the global marketplace.

    Why is machine learning everywhere now?

    Machine learning can be beneficial for your company in many ways. Of course, these applications depend on the needs of your organization.

    It can be used in various ways. For example, if we have a problem with managing our customer service, we should consider implementing a machine learning application in this part of our company. In 2013, a company named DigitalGenius was founded to use machine learning to solve a number of customer service issues.

    But AI can do so much more!

    With an AI application (based on machine learning algorithms) that is built specially to fit our needs, we can automate any repetitive tasks like doing our company’s monthly paperwork. Our employees could then focus on more creative tasks that cannot be accomplished by an algorithm. Deloitte points out that machine learning is invaluable for boosting efficiency in many organizations. This is one of the reasons they estimate 2021 spending on machine learning will exceed $57 billion.

    One of the most sought-after features of machine learning AI is the capability to predict things. AI can analyze the market, or the data we provide, and make assumptions that turn out to be mostly accurate. Thanks to that feature, AI can, among other things, target products to customers based on their shopping habits and online actions.

    In which company can machine learning be most beneficial?

    Nowadays AI is implemented in nearly every field of business. The most inspiring examples are in the medical industry, where artificial intelligence can improve the performance of various tests, which in turn can help save more lives. A quicker diagnosis means a quicker recovery.

    Because of that, we cannot single out one field of business that benefits the most from implementing machine learning AI. Nor does it depend on the size of the company. Small enterprises naturally have less money to invest, but with AI, the more time and effort we put into building and implementing the application, the more time and money we are likely to get back in the future. It is a long-term investment, but without a doubt a smart one.

    What type of AI is the most suited for us?

    It is important, though, to choose wisely among the different types of AI and pick one that suits our company's needs. Machine learning works best when there is big data to manage; if the available data is limited, a machine learning solution may not be the best option. To function properly, machine learning AI should be provided with a vast amount of 'good' data that can be organized into patterns. So the best thing to do is to hire a data scientist who can first curate our big data and then present it to our algorithm.

    If we are still not certain whether we should invest in an AI solution, the best thing we can do is contact a professional machine learning expert. One option is to reach out to a company providing these types of services.

    Machine learning is driving countless changes in every industry

    Machine learning is having a major impact on the global marketplace. Companies in every industry are using machine learning technology to increase efficiency and boost output. It will have a profound effect on companies of all sizes over the next few years.

    Author: Ryan Kh

    Source: SmartDataCollective

  • Why the right data input is key: A Machine Learning example

    Why the right data input is key: A Machine Learning example

    Finding the 'sweet spot' of data needs and consumption is critical to a business. Without enough data, the business model underperforms. With too much, you run the risk of compromised security and protection. Measuring what data intake is needed, like a balanced diet, is key to optimum performance and output. A healthy diet of data will set a company on the road to maximum results without drifting into the red areas on either side.

    Machine learning is not black magic. A simple definition is the application of learning algorithms to data to uncover useful aspects of the input. There are clearly two parts to this process, though: the algorithms themselves and the data being processed and fed in.

    The algorithms are vital, and continually tuning and improving them makes a significant difference to the success of the solutions. However, these are just mathematical operations on the data; the pivotal part is the data itself. Quite simply, the algorithms cannot work well on a poor volume of data: a deficit leaves the system undernourished and hungering for more. With more data to consume, the system can be trained more fully and the outcomes are stronger.

    Without question, there is a big need for an ample amount of data to offer the system a healthy helping to configure the best outcomes. What is crucial, though, is that the data collected is representative of the tasks you intend to perform.

    Within speech recognition, for example, this means that you might be interested in any or all of the following attributes:


    • formal speech/informal speech
    • prepared speech/unprepared speech
    • trained speakers/untrained speakers
    • presenter/conversational
    • general speech/specific speech
    • accents/dialects


    • noisy/quiet
    • professional recording/amateur recording
    • broadcast/telephony
    • controlled/uncontrolled

    In reality, all of these attributes affect the ability to perform speech recognition tasks with ultimate accuracy. Therefore, the data needed to tick all the boxes varies, and involves varying degrees of difficulty to obtain. Bear in mind that it is not just the audio that is needed; accurate transcripts are required to perform training. That probably means most data will need to be listened to by humans to transcribe or validate it, and that can create an issue of security.

    An automatic speech recognition (ASR) system operates in two modes: training and operating.


    Training is most likely managed by the AI/ML company providing the service, which means the company needs access to large amounts of relevant data. In some cases, this is readily available in the public domain; for example, content that has already been broadcast on television or radio and therefore has no associated privacy issues. But this sort of content cannot help with many of the other scenarios in which ASR technology can be used, such as phone call transcription, which has very different transcription characteristics. Obtaining this sort of data can be tied up with contracts for data ownership, privacy, and usage restrictions.


    In operational use, there is no need to collect audio; you simply use the models that have previously been trained. But there is an obvious temptation to capture the operational data and use it. This, as mentioned, is where the challenge begins: ownership of the data. Many cloud solution providers want to use the data openly, as it enables continuous improvement for the required use cases. Data ownership becomes the lynchpin.
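    The separation between the two modes can be sketched as below. The "model" here is a toy word-frequency table standing in for a real acoustic and language model, and all names and data are invented for illustration; the point is only that training consumes (audio, transcript) pairs, while operation applies a frozen model and need not retain incoming audio.

```python
# Toy sketch of ASR's two modes: train() needs audio plus transcripts;
# operate() uses only the already-trained model and discards the audio.

def train(corpus):
    """corpus: list of (audio, transcript) pairs -> word-frequency 'model'."""
    model = {}
    for _audio, transcript in corpus:   # training requires transcripts too
        for word in transcript.split():
            model[word] = model.get(word, 0) + 1
    return model

def operate(model, audio):
    """Operating mode: consult the frozen model; never store the audio."""
    _ = audio  # processed, then discarded
    return max(model, key=model.get)  # toy stand-in for decoding
```

    Keeping operational audio out of the training path, as `operate` does here, is exactly what avoids the data-ownership question the article raises.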

    The challenge is to be able to build great models that work really well in any scenario without capturing privately-owned data. A balance between quality and security must be struck. This trade-off happens in many computer systems but somehow data involving people’s voices often, understandably, generates a great deal of concern.

    Finding a solution

    To ultimately satiate an ASR system, there needs to be just enough data provided to execute the training so good systems can be built. There is an option for companies to train their own models, which enables them to maintain ownership of the data. This can often require a complex professional services agreement, requiring a good investment of time, but it can provide a solution at a reasonable cost very quickly.

    ML algorithms are in a constant state of evolution, and techniques can now be used that allow smaller data sets to be used to bias systems already trained on big data. In some cases, smaller amounts of data can achieve ‘good enough’ accuracy. The overall issue of data acquisition is not removed, but sometimes less data can provide solutions.
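    Biasing a big-data model with a small in-domain set can be sketched very simply: blend a large general model with a small domain-specific one, up-weighting the latter. The word counts, weight, and function name below are invented for illustration.

```python
# Sketch of domain biasing: combine a general word-frequency model trained on
# big data with a small domain sample, weighting the small set more heavily.
# All data and the weight are hypothetical.

def blend_models(general, domain, domain_weight=5.0):
    """Return a copy of `general` with up-weighted counts from `domain`."""
    blended = dict(general)
    for word, count in domain.items():
        blended[word] = blended.get(word, 0) + domain_weight * count
    return blended

general = {"the": 1000, "meeting": 40, "angioplasty": 1}
domain = {"angioplasty": 10, "stent": 8}   # small medical-dictation sample
model = blend_models(general, domain)
```

    The small sample never has to be large enough to train a model from scratch; it only nudges an already-trained system toward the target domain.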

    Finding a balanced data diet, by enabling better algorithm tuning and the filtering and selection of data, can get the best results without collecting everything that has ever been said. More effort may be needed to achieve the best equilibrium. And, without doubt, the industry must keep searching for ways to make the technology work better without compromising people's privacy.

    Author: Ian Firth

    Source: Insidebigdata

  • Why we should be aware of AI bias in lending

    Why we should be aware of AI bias in lending

    It seems that, beyond all the hype, AI (artificial intelligence) applications in lending really do speed up and automate decision-making.

    Indeed, a couple of months ago Upstart, an AI-leveraging fintech startup, announced that it had raised a total of $160 million since inception. It also inked deals with the First National Bank of Omaha and the First Federal Bank of Kansas City.

    Upstart won recognition for its innovative approach to lending. The platform identifies who should get a loan, and of what amount, using AI trained on so-called 'alternative data'. Such alternative data can include information on an applicant's purchases, type of phone, favorite games, and social media friends' average credit score.

    However, the use of alternative data in lending is still far from making the process faster, fairer, and wholly GDPR-compliant. Besides, it's not an absolute novelty.

    Early credit agencies hired specialists to dig into local gossip about their customers, while back in 1935, neighborhoods in the U.S. were classified according to their collective creditworthiness. In a more recent case, from 2002, a Canadian Tire executive analyzed the previous year's transactional data to discover that customers buying roof-cleaning tools were more financially reliable than those purchasing cheap motor oil.

    There is one significant difference between the past and the present, however. Earlier, it was a human who collected and processed both alternative and traditional data, including debt-to-income, loan-to-value, and individual credit history. Now, the algorithm is stepping forward, as many believe it to be more objective as well as faster.

    What gives cause for concern, though, is that AI can turn out to be no less biased than humans. Heads up: if we don't control how the algorithm self-learns, AI can become even more one-sided.

    Where AI bias creeps in

    Generally, AI bias doesn’t happen by accident. People who train the algorithm make it subjective. Influenced by some personal, cultural, educational, and location-specific factors, even the best algorithm trainers might use inherently prejudiced input data.

    If not detected in time, this can result in biased decisions that only worsen over time, because the algorithm bases its new decisions on its previous ones. Evolving on its own, it ends up far more complex than at the beginning of its operation (the classic snowball effect). In plain words, it continuously learns by itself, whether the educational material is correct or not.
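    The snowball effect can be illustrated with a toy simulation: a model "retrained" on its own past decisions keeps raising its own bar for approval. The scores, seed decisions, and decision rule below are all invented for illustration.

```python
# Toy feedback-loop illustration: the approval threshold is 'learned' as the
# mean score of past approvals, and the model's own decisions feed straight
# back into its training data, so the threshold creeps upward each round.

def retrain(history):
    """history: list of (score, approved) -> learned approval threshold."""
    approved = [score for score, decision in history if decision]
    return sum(approved) / len(approved)

# Seed decisions made by (possibly prejudiced) humans.
history = [(0.9, True), (0.8, True), (0.6, False), (0.5, False)]
thresholds = []
for _ in range(3):
    threshold = retrain(history)
    thresholds.append(threshold)
    # New applicants judged by the learned rule; those decisions become
    # the next round's training data.
    for score in (0.86, 0.95):
        history.append((score, score >= threshold))
```

    After three rounds the learned threshold has drifted strictly upward even though no new human prejudice was added, which is the snowball the article warns about.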

    Now, let’s look at how exactly AI might discriminate in the lending decisions it makes. Looking at the examples below, you'll easily follow the key idea: AI bias often goes back to human prejudice.

    AI can discriminate based on gender

    While there are traditionally more men in senior and higher-paid positions, women continue facing the so-called ‘glass ceiling’ and pay gap problems. As a result, even though women on average tend to be better savers and payers, female entrepreneurs continue receiving fewer and smaller business loans compared to men.

    The use of AI might only worsen this tendency, since sexist input data can lead to a spate of loan denials for women. Relying on misrepresentative statistics, AI algorithms might favor a male applicant over a female one even when all other parameters are relatively similar.

    AI can discriminate based on race

    This sounds harsh, but black applicants are twice as likely to be refused a mortgage as white ones. If the input data used for the algorithm's learning reflects such a racial disparity, it can put it into practice pretty fast and start causing more and more denials.

    Alternative data can also become a source of 'AI racism'. Consider an algorithm using seemingly neutral information on an applicant's prior fines and arrests. The truth is, such information is not neutral. According to The Washington Post, African-Americans become policing targets much more frequently than the white population, and in many cases baselessly.

    The same goes for other types of data. Racial minorities face inequality in employment and in the neighborhoods they live in, and any of these metrics can become a solid reason for AI to say ‘no’ to a non-white applicant.

    AI can discriminate based on age

    The longer a credit history, the more we know about a particular person’s creditworthiness. Older people typically have longer credit histories simply because they have more financial transactions behind them.

    Younger applicants, by contrast, have generated less data about their finances, which can become an unfair reason for a credit denial.

    AI can discriminate based on education

    Consider an AI lending algorithm that analyzes an applicant’s grammar and spelling while making credit decisions. An algorithm might ‘learn’ that bad spelling habits or constant typos point to poor education and, consequently, bad creditworthiness.

    In the long run, the algorithm may stop qualifying individuals with writing difficulties or disorders, even when those have nothing to do with their ability to pay their bills.

    Tackling prejudice in lending

    Overall, to make AI-run loan processes free of bias, it is crucial to cleanse the input data of any possible human prejudice, from misogyny and racism to ageism.
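One common first screen for such bias in lending decisions is the ‘four-fifths rule’ borrowed from US employment-discrimination practice: if the least-favored group's approval rate falls below 80% of the most-favored group's, the process deserves scrutiny. This is a heuristic, not a legal test, and the audit sample below is hypothetical, but a minimal check might look like this:

```python
def adverse_impact_ratio(decisions):
    """Compute per-group approval rates and the adverse impact ratio
    (lowest approval rate divided by highest). A ratio below 0.8 fails
    the common 'four-fifths rule' screen for disparate impact."""
    rates = {group: sum(outcomes) / len(outcomes)
             for group, outcomes in decisions.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

audit_sample = {  # hypothetical decisions: 1 = approved, 0 = denied
    "group_x": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approved
    "group_y": [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],  # 50% approved
}
rates, ratio = adverse_impact_ratio(audit_sample)
print(rates, ratio)  # ratio ~0.625, below the 0.8 screen
```

A real audit would also control for legitimate risk factors; a raw rate gap alone does not prove discrimination, but it flags where to look.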

    To make training data more neutral, organizations should form more diverse AI development teams of both lenders and data scientists, where the former can brief engineers on the specifics of their work. What's more, such financial organizations should train everyone involved in AI-assisted decision-making to adhere to and enforce fair, non-discriminatory practices. Otherwise, without measures to ensure diversity and inclusivity, lending businesses risk generating AI algorithms that severely violate anti-discrimination and fair-lending laws.

    Another step toward fairer AI is to make sure that no lending decision is made solely by the algorithm; a human supervisor should assess these decisions before they have a real-life impact. Article 22 of the GDPR supports this, stating that people should not be subjected to purely automated decision-making, particularly when it produces legal effects.
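A human-in-the-loop gate of the kind Article 22 calls for can be sketched as follows. The threshold and routing policy here are hypothetical, and a real system would also log the case and the reviewer's final verdict; the structural point is that no adverse decision leaves the system without a human in the path.

```python
def decide(application_id, model_score, threshold=0.6):
    """Sketch of a human-in-the-loop gate: the model may approve on its
    own, but any would-be denial is escalated to a human reviewer before
    it takes effect, so no adverse decision is purely automated."""
    if model_score >= threshold:
        return {"id": application_id, "decision": "approved", "by": "model"}
    return {"id": application_id, "decision": "pending_human_review",
            "by": "review_queue",
            "note": f"model score {model_score:.2f} below {threshold:.2f}"}

print(decide(1, 0.72))  # approved automatically
print(decide(2, 0.41))  # routed to the human review queue
```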

    The truth is, this is easier said than done. Yet if left unaddressed, unintentional AI bias can put lending businesses in as tough a spot as any intentional act of discrimination, and only through the collective effort of data scientists and lending professionals can these risks be averted.

    Author: Yaroslav Kuflinski

    Source: Information-management

  • Who will dominate in the future: man or machine?

    Developments in information technology are moving fast, perhaps ever faster. We hear and see more and more about business intelligence, self-service BI, artificial intelligence and machine learning. We see this in employees who increasingly have management information at their fingertips through tools, in self-driving cars, in care robots for dementia patients, and in computers that beat humans at games.

    What does this mean?

    • Companies' business models will change
    • Innovations may no longer come primarily from humans
    • Much of what is now human labor will be taken over by machines.

    This article highlights a few developments to show how important business intelligence is today.

    Business models based on data

    We read daily that information technology is turning existing business models upside down; we need only look at V&D. The number of companies whose business model depends crucially on external data collection and analysis is growing rapidly, even in sectors until now strongly dominated by government, such as education and healthcare. Well-known companies such as Google and Facebook actually started without a concrete revenue model, but could no longer function without that data (and its analysis).


    Take a company like Amazon, which runs entirely on data. The data it collects largely concerns who we are, how we behave and what we prefer. Amazon gives this data ever more meaning by applying the latest technologies: it even develops films and books based on our purchasing, viewing and reading behavior, and it will certainly not stop there. According to Gartner, Amazon is one of the most leading and visionary players in the market for Infrastructure as a Service (IaaS). Gartner also praises Amazon for how quickly it anticipates the market's technological needs.


    According to the United Nations, the next wave of innovations will emerge from artificial intelligence. This assumes that machines will surpass humans when it comes to inventing new things. IBM's Watson computer, for example, has already beaten humans on the quiz show Jeopardy. For difficult mathematical calculations we can no longer do without computers, but that does not mean the computer surpasses humans in everything. The development of self-driving cars recently showed that, even with machine learning, humans can still take the lead, and on balance far less development time was needed.

    Man or machine?

    It is a fact that machines will take over more and more human tasks and will sometimes even surpass humans in cognitive ability. In the coming years, man and machine will increasingly live side by side, and the computer will understand and master human behavior ever better. As a result, existing business models will change and many jobs in existing sectors will be lost. But whether the computer will truly overtake humans, and whether future innovation will come solely from artificial intelligence, remains an open question. Granted, the industrial revolution had an enormous impact on humanity and, in hindsight, brought many benefits, even though it was far from easy for many people at the time. Let us see how we can turn this development to our advantage. Interested? Click here for more information.

    Ruud Koopmans, RK-Intelligentie.nl, February 29, 2016

