6 items tagged "big data"

  • 10 Big Data Trends for 2017

    Infogix, a leader in helping companies provide end-to-end data analysis across the enterprise, today highlighted the top 10 data trends they foresee will be strategic for most organizations in 2017.
     
    “This year’s trends examine the evolving ways enterprises can realize better business value with big data and how improving business intelligence can help transform organization processes and the customer experience (CX),” said Sumit Nijhawan, CEO and President of Infogix. “Business executives are demanding better data management for compliance and increased confidence to steer the business, more rapid adoption of big data and innovative and transformative data analytic technologies.”
     
    The top 10 data trends for 2017 were assembled by a panel of Infogix senior executives. The key trends include:
     
    1.    The Proliferation of Big Data
        Proliferation of big data has made it crucial to analyze data quickly to gain valuable insight.
        Organizations must turn the terabytes of big data that are not being used, classified as dark data, into usable data.
        Big data has not yet yielded the substantial results that organizations require to develop new insights for new, innovative offerings and derive a competitive advantage.
     
    2.    The Use of Big Data to Improve CX
        Using big data to improve CX by moving from legacy to vendor systems, during M&A, and with core system upgrades.
        Analyzing data with self-service flexibility to quickly harness insights about leading trends, along with competitive insight into new customer acquisition growth opportunities.
        Using big data to better understand customers in order to improve top line revenue through cross-sell/upsell or remove risk of lost revenue by reducing churn.
     
    3.    Wider Adoption of Hadoop
        More and more organizations will be adopting Hadoop and other big data stores; in turn, vendors will rapidly introduce new, innovative Hadoop solutions.
        With Hadoop in place, organizations will be able to crunch large amounts of data using advanced analytics to find nuggets of valuable information for making profitable decisions.
     
    4.    Hello to Predictive Analytics
        Precisely predict future behaviors and events to improve profitability.
        Make a leap in improving fraud detection rapidly to minimize revenue risk exposure and improve operational excellence.
     
    5.    More Focus on Cloud-Based Data Analytics
        Moving data analytics to the cloud accelerates adoption of the latest capabilities to turn data into action.
        Cut costs in ongoing maintenance and operations by moving data analytics to the cloud.
     
    6.    The Move toward Informatics and the Ability to Identify the Value of Data
        Use informatics to help integrate the collection, analysis and visualization of complex data to derive revenue and efficiency value from that data.
        Tap an underused resource – data – to increase business performance.
     
    7.    Achieving Maximum Business Intelligence with Data Virtualization
        Data virtualization unlocks what is hidden within large data sets.
        Graphic data virtualization allows organizations to retrieve and manipulate data on the fly regardless of how the data is formatted or where it is located.
     
    8.    Convergence of IoT, the Cloud, Big Data, and Cybersecurity
        Data management technologies such as data quality, data preparation, data analytics and data integration will continue to converge.
        As we continue to become more reliant on smart devices, inter-connectivity and machine learning will become even more important to protect these assets from cyber security threats.
     
    9.    Improving Digital Channel Optimization and the Omnichannel Experience
        Balancing traditional and digital channels to connect with customers through their preferred channel.
        Continuously looking for innovative ways to enhance CX across channels to achieve a competitive advantage.
     
    10.    Self-Service Data Preparation and Analytics to Improve Efficiency
        Self-service data preparation tools shorten time to value by enabling organizations to prepare data of any type, whether structured, semi-structured or unstructured.
        Introducing more self-service capabilities decreases reliance on development teams to massage the data, giving power to the user and, in turn, improving operational efficiency.
     
    “Every year we see more data being generated than ever before and organizations across all industries struggle with its trustworthiness and quality. We believe the technology trends of cloud, predictive analysis and big data will not only help organizations deal with the vast amount of data, but help enterprises address today’s business challenges,” said Nijhawan. “However, before these trends lead to the next wave of business, it’s critical that organizations understand that the success is predicated upon data integrity.”
     
    Source: dzone.com, November 20, 2016
  • CBS establishes Center for Big Data Statistics

    Statistics Netherlands (Centraal Bureau voor de Statistiek, CBS) is launching a Center for Big Data Statistics at the end of September. The aim is to develop new solutions based on big data technology. The new CBS centre of expertise will collaborate with a range of national and international organisations.

    The official opening of the CBS Center for Big Data Statistics takes place on 27 September, as part of a visit by Prime Minister Mark Rutte to South Korea. The national statistics office of South Korea is one of the partners the centre of expertise will work with. Similar offices in England and Italy, as well as the World Bank and a number of foreign universities, are also among the collaborating parties.

    The CBS Center for Big Data Statistics will also work with a number of national partners. At the opening of the Brightlands Smart Services Campus in Heerlen on 12 September 2016, CBS signed a first letter of intent to this effect with the Brightlands campus, Universiteit Maastricht, the Open Universiteit and Zuyd Hogeschool. To strengthen these ties, CBS is taking part in the Visualisation lectureship at the Smart Services Campus.

    Heerlen

    The centre of gravity of the big data department lies in Heerlen, where CBS also has an office, says director-general Tjark Tjin-a-Tsoi after the opening ceremony at the campus in Heerlen. ‘As a statistics office we naturally hold enormous collections of data. Through big data applications we want to be able to uncover even more patterns in data. We also want to answer questions about and from society better, faster and in more detail, at lower cost and with less administrative burden. For example, we are looking for ways to burden companies less with our requests for their data.’

    Reputation

    CBS employs its own application developers, but the office deliberately chooses national and international collaboration. Tjin-a-Tsoi: ‘The days when we were an SPSS stronghold are well behind us. We are genuinely looking for innovative solutions, and for that, collaboration on big data with other experts from education, government and industry is crucial. Internationally, CBS is known for its statistical methods, which we also sell on to other organisations. We want to earn that same reputation with our future big data solutions.’

    Source: Computable, 14 September 2016

     

  • How Big Data Is Changing Disruptive Innovation

    Much fanfare has been paid to the term “disruptive innovation” over the past few years. Professor Clayton M. Christensen has even re-entered the fold, clarifying what he means when he uses the term. Despite the many differences in application, most people agree on the following. Disruptive innovations are:

    Cheaper (from a customer perspective)

    More accessible (from a usability or distribution perspective)

    And use a business model with structural cost advantages (relative to existing solutions)

    The reason these characteristics of disruption are important is that when all three are present, it’s difficult for an existing business to respond to competition. Whether a company is saddled with fixed infrastructure, highly trained specialist employees, or an outmoded distribution system, quickly adapting to new environments is challenging when one or all of those things becomes obsolete. Firing hundreds of employees, upsetting your core business’ distribution partners, writing off billions of dollars of investment — these things are difficult for managers to even contemplate, and with good reason.

    Historically, the place we’ve looked for hints of oncoming disruptions has been in the low end of the market. Because disruptive products were cheaper, more accessible, and built on new technology architectures, they tended to be crummier than the existing highest-end solutions. Their cost advantage allowed them to reach customers who’d been priced out of an existing market; Apple originally made a computer that was cheap enough for students to learn on, a population that wouldn’t have dreamt of purchasing a DEC minicomputer. Sony famously made the transistor-based television popular based on its “portability.” No one knew that you could reasonably do that prior to the transistor. New technologies, combined with business model innovation, provide the structural cost advantage necessary to take large chunks of the market over time.

    But if you return to the definition above, the fact that low-end entry was typical of a disruptive approach was never core to the phenomenon. Instead, it was a byproduct. Why? Because any new entrant is hard pressed to deliver superior value to a mature market, where products have been refined over decades.

    But although the low-end approach was pretty common, it wasn’t what was holding incumbent firms captive. It was their own cost structures and their focus on driving marginal profit increases that kept those companies headed down the wrong paths. As long as making the right decision on a short-term basis (trying to drive more value out of outdated infrastructure) is the wrong decision on a long-term basis (failing to adopt new technology platforms), CEOs are destined to struggle.

    Unfortunately, the focus on the low-end approach of disruption is actually clouding our ability to spot the things that are: cheaper, more accessible, and built on an advantaged cost structure. Specifically, it appears that data-enabled disruptors often confound industry pundits. To get a sense for the point, just look to a few highly contested examples.

    Is Uber disruptive? The wrong answer would be to say, “No, because their first product started in the high end of the market.” The right answer would be to acknowledge that the platform they ultimately launched allowed them to add lower-cost drivers (in the form of UberX) and offer cheaper, more accessible transportation options with a structural cost advantage over both taxi services and potentially even car ownership. The convenience of the app is only the most obvious, and easiest to copy, factor.

    Were Google’s Android phones disruptive to Nokia? The wrong answer would be to say “No, because the initial smartphones they launched were superior in feature quality to Nokia’s own phones that dominated the global landscape.” The right answer would be to acknowledge that the approach of creating an ecosystem of application development atop its platform allowed them to build far more comprehensive solutions that were (on the whole) cheaper, more accessible, and structurally cost advantaged over Nokia.

    Is 23andMe potentially disruptive to pharmaceutical companies? The wrong answer would be to say, “No, because they compete in completely different verticals,” one in ancestry and the other in drug development. The right answer would be to acknowledge that 23andMe has a vast amount of data that could enable them to start developing drugs in a cheaper, more accessible, and structurally advantaged model.

    In every one of these examples, the ultimate end is disruption. In every one of these examples, incumbent managers have a short-term incentive to ignore the challenge — making best use of their existing infrastructure. Taxi companies tried to leverage regulation to preserve the value of their medallions and drivers. Nokia tried frivolously to protect its closed ecosystem and preserve employment for its thousands of Symbian-focused staff members. And you can be certain that Merck, Pfizer, and Roche have strong incentives to make the best use of their high-end R&D functions before embracing the radically different path that 23andMe might take.

    And over the long term, each of these short-term decisions could lead to failure.

    The conversation misses that something new is going on in the world of innovation. With information at the core of most modern disruptions, there are new opportunities to attack industries from different angles. Uber built a platform in a fragmented limo market that let it come into transportation and logistics more broadly. Netflix captured your eyeballs through streaming video and used the data it had to blow up the content production process. Google mapped the world, and then took its understanding of traffic patterns and street layouts to build autonomous cars.

    There is no doubt that disruption is underway here. These players create products that are cheaper and more accessible than their peers’. But it’s not necessarily starting at the low end of the market; it’s coming from orthogonal industries with strong information synergy. It’s starting where the source of data is, then building the information-enabled system to attack an incumbent industry.

    It’s time for executives, entrepreneurs, and innovators to stop quibbling over whether something satisfies the traditional path of disruption. Data-enabled disruption may represent an anomaly to the existing theory, but it’s here — and it’s here to stay. The waste Uber has laid to the taxi industry is evidence that the new solution had extraordinary cost advantages and that incumbents couldn’t respond. The new questions should be:

    • “How can you adapt in the face of this new type of competition?”
    • “How do you evaluate new threats?”
    • “What capabilities do you need and where do you get them, when data is a critical piece of any new disruption?”


    To succeed in this new environment, threatened businesses need a thoughtful approach to identifying potential threats combined with the will to make the right long-term investments — despite short-term profit incentives.

    Source: Harvard Business Review

  • How to pull back your data archives from the brink of chaos

    Organisations of all shapes and sizes across the world are drowning in big data; there’s no doubt about it. Yet, with hard drives, servers, file cabinets and storage facilities across the UK at capacity, the volume of information being collected will only continue to increase in the coming years.

    What organisations are failing to see, however, is that massive amounts of data are leading to cluttered archives and inefficient strategies that keep organisations from mining insights that could otherwise improve business outcomes.

    So what is data archiving and why is it so important?

    Not to be confused with data back-up, data archiving is the process of storing fixed content for future retrieval and use. While archiving data has typically meant moving less frequently accessed, static data into long-term storage, archiving now includes strategies such as archive in-place, data warehousing and fully indexed content placed on near-term storage solutions, all designed to allow greater accessibility to data.

    These strategies make it faster and easier for organisations to meet increased legal and regulatory demands and create opportunities for businesses to synthesise the information necessary to inform critical business decisions.
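
    A loose illustration of one of these strategies – tiering by access age – is sketched below in Python: files that have not been read for a year are moved from a primary directory to an archive tier. The directory paths and the one-year cut-off are placeholder assumptions; real archiving platforms layer indexing, retention schedules and audit trails on top of this kind of policy.

    ```python
    # Loose sketch of age-based archive tiering: move cold files to a cheaper tier.
    # The directories and the one-year cut-off are placeholder assumptions.
    import os
    import shutil
    import time

    ACTIVE_DIR = "/data/active"    # hypothetical primary storage
    ARCHIVE_DIR = "/data/archive"  # hypothetical long-term tier
    MAX_IDLE_SECONDS = 365 * 24 * 3600

    def archive_cold_files() -> None:
        now = time.time()
        os.makedirs(ARCHIVE_DIR, exist_ok=True)
        for name in os.listdir(ACTIVE_DIR):
            path = os.path.join(ACTIVE_DIR, name)
            if os.path.isfile(path) and now - os.path.getatime(path) > MAX_IDLE_SECONDS:
                # The file stays retrievable, just on slower, cheaper storage.
                shutil.move(path, os.path.join(ARCHIVE_DIR, name))

    if __name__ == "__main__":
        archive_cold_files()
    ```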

    A recent study, 'Mining for Insight: Rediscovering the Data Archive', an IDC whitepaper sponsored by Iron Mountain, confirmed that organisations are indeed drowning in data and unable to effectively mine their data archives for key insights.

    The findings, however, also indicate that a subset of organisations are in fact successfully leveraging their data archives and the benefits are impressive - as much as an additional $10M (£6.4M) in cost savings from streamlined IT and customer service operations.

    The more data you have, the more problems you’ll get

    The study found that without clear processes and pressure from the top to implement big data programmes, more than 48% of organisations simply archive everything to avoid investing time and resources upfront to determine what’s truly important.

    Over time, companies archiving everything quickly amass ‘data swamps’ that make data hard to find when needed, as opposed to the ‘data lakes’ many businesses aspire to create with a crystal-clear data archiving strategy for quick and easy access to information.

    Big data blindness

    Surprisingly, the study found that 72% of organisations in the UK believe they are already maximising the value of their archives. However, the study also found that only 32% of companies are actually using archives for business analysis, a critical process to drive additional revenue.

    This is a serious disconnect that demonstrates data archiving is a real blind spot for business leaders.

    Even more telling is the fact that a staggering 87% of organisations lack a uniform process for archiving across data types, making it difficult to identify and access important information when needed.

    Archived data impacts the bottom line

    The study reveals that organisations with a well-defined data archive process stand to realise value from two potential avenues: cost savings and added revenue from monetising archives.

    On the savings front, nearly half (47%) of the organisations polled in the UK realised $1M (£640,000) or more in savings over the past year from risk mitigation and avoidance of litigation, with the top 19% reporting savings of more than $10M (£6.4M).

    Similarly, nearly half (45%) of organisations reaped $1M (£640,000) or more in savings stemming from reduced operational or capital costs, with the top 17% capturing more than $10M (£6.4M).

    More striking is an organisation’s ability to draw new revenue from an effectively managed data archive. More than a third (36%) of companies surveyed benefitted from an additional $1M (£640,000) or more in revenue, and the top 12% gained more than $10M (£6.4M). On average, companies polled saw an additional $7.5M (£4.8M) in new revenue streams from their data archive.

    So how can organisations bridge the disconnect between perception and reality?

    Appoint a Chief Data Officer to oversee and derive value from the data archive, while working closely with the Chief Operating and Chief Information Officers to set long-term business and data strategies.

    Develop information maps of all data sources and repositories (and their value) across the organisation.

    Implement a holistic, consistent archiving strategy that addresses data retention schedules, use cases, the value of data, necessary accessibility and archive costs.

    And consider working with a third party vendor with specific expertise to help optimise your archiving solution while freeing up internal IT resources to focus on more strategic and innovative work.

    The disconnect between perception and reality when it comes to data archiving is real, and just because an organisation is drowning in big data, doesn’t mean it can’t get back on track.

    Source: Information Age

  • The 10 Commandments of Business Intelligence in Big Data

    Organizations today don’t use previous-generation architectures to store their big data. Why would they use previous-generation BI tools for big data analysis? When looking at BI tools for your organization, there are 10 “Commandments” you should live by.

    First Commandment: Thou Shalt Not Move Big Data
    Moving Big Data is expensive: it is big, after all, so physics is against you if you need to load it up and move it. Avoid extracting data out into data marts and cubes, because “extract” means moving, and that creates big-data-sized problems in maintenance, network performance and additional CPU — on two copies that are logically the same. Pushing BI down to the lower layers, so the work runs at the data, is what motivated Big Data in the first place.
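
    As a rough sketch of what “run at the data” can look like in practice, the PySpark snippet below filters and aggregates inside the cluster and pulls back only the summarised result. The table name and columns are hypothetical; the point is that only the small aggregate, not the raw data, ever moves.

    ```python
    # Minimal sketch: aggregate where the data lives instead of extracting raw rows.
    # Assumes a running Spark cluster and a hypothetical "events" table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pushdown-demo").getOrCreate()

    events = spark.table("events")  # hypothetical table registered in the metastore

    # The filter and aggregation execute on the cluster, next to the data.
    summary = (
        events
        .filter(F.col("event_date") >= "2017-01-01")
        .groupBy("region")
        .agg(F.sum("revenue").alias("total_revenue"))
    )

    # Only the small, aggregated result is moved to the BI/reporting layer.
    summary.show()
    ```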

    Second Commandment: Thou Shalt Not Steal!...Or Violate Corporate Security Policy
    Security’s not optional. The sadly regular drumbeat of data breaches shows it’s not easy, either. Look for BI tools that can leverage the security model that’s already in place. Big Data can make this easier, with unified security systems like Ranger, Sentry and Knox; even Mongo has an amazing security architecture now. All these models allow you to plug right in, propagate user information all the way up to the application layer, and enforce a visualization’s authorization and the data lineage associated with it along the way. Security as a service: use it.

    Third Commandment: Thou Shalt Not Pay for Each User, Nor Every Gigabyte
    One of the fundamental beauties of Big Data is that when done right, it can be extremely cost effective. Putting five petabytes of data into Oracle could break the bank; but you can do just that in a big data system. That said, there are certain price traps you should watch out for before you buy. Some BI applications charge users by the gigabyte, or by gigabyte indexed. Caveat emptor! Geometric, even exponential, growth in both data and adoption is entirely normal with big data. Our customers have seen deployments grow from tens of billions of entries to hundreds of billions in a matter of months, with a user base up by 50x. That’s another beauty of big data systems: incremental scalability. Make sure you don’t get lowballed into a BI tool that penalizes your upside.

    Fourth Commandment: Thou Shalt Covet Thy Neighbor’s Visualizations
    Sharing static charts and graphs? We’ve all done it: publishing PDFs, exporting to PNGs, email attachments, etc. But with big data and BI, static won’t cut it: all you have is pretty pictures. You should be able to let anyone you want interact with your data. Think of visualizations as interactive roadmaps for navigating data; why should only one person take the journey? Publishing interactive visualizations is only the first step. Look ahead to the GitHub model. Rather than “Here’s your final published product,” get “Here is a viz, make a clone, fork it, this is how I arrived at those insights, and see what other problem domains it applies to.” It lets others learn from your insights.

    Fifth Commandment: Thou Shalt Analyze Thy Data In Its Natural Form
    Too often, I hear people referring to big data as “unstructured.” It’s far more than that. Finance and sensors generate tons of key-value pairs. JSON — probably the trendiest data format of all — can be semi-structured, multi-structured, etc. MongoDB has made a huge bet on keeping data in this format: beyond its virtues for performance and scalability, expressiveness gets lost when you convert it into rows and tables. And lots of big data is still created in tables, often with thousands of columns, and you’re going to have to do relational joins over all of it: “Select this from there when that...” Flattening can destroy critical relationships expressed in the original structure. Stay away from BI solutions that tell you “please transform your data into a pretty table because that’s the way we’ve always done it.”
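
    To make the flattening point concrete, here is a small illustration using Python’s standard json module and an invented order record: the nesting itself expresses which line items belong to which order, a relationship that a single flat table of fixed columns would struggle to preserve.

    ```python
    # Minimal sketch: querying semi-structured JSON in its natural nested form.
    # The record below is invented for illustration.
    import json

    doc = json.loads("""
    {
      "order_id": "A-1001",
      "customer": {"id": 42, "segment": "enterprise"},
      "lines": [
        {"sku": "SKU-1", "qty": 3, "price": 20.0},
        {"sku": "SKU-2", "qty": 1, "price": 99.5}
      ]
    }
    """)

    # The nesting expresses the relationship: these lines belong to this order.
    order_total = sum(line["qty"] * line["price"] for line in doc["lines"])
    print(doc["order_id"], doc["customer"]["segment"], order_total)

    # Flattening into fixed columns (order_id, sku1_qty, sku2_qty, ...) forces a schema
    # decision up front and breaks as soon as an order has a different number of lines.
    ```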

    Sixth Commandment: Thou Shalt Not Wait Endlessly For Thine Results
    In 2016 we expect things to be fast. One classic approach is OLAP cubes, essentially moving the data into a pre-computed cache to get good performance. The problem is you have to extract and move data to build the cube before you get that performance (see Commandment #1). Now, this can work pretty well at a certain scale... until the temp table becomes gigantic and crashes your laptop by trying to materialize it locally. New data will stop analysis in its tracks while you extract that data to rebuild the cache. Be wary of sampling too: you may end up building a visualization that looks great and performs well before you realize it’s all wrong because you didn’t have the whole picture. Instead, look for BI tools that make it easy to continuously change which data you are looking at.

    Seventh Commandment: Thou Shalt Not Build Reports, But Apps Instead
    For too long, ‘getting the data’ meant getting a report. In big data, BI users want asynchronous data from multiple sources so they don’t need to refresh anything — just like anything else that runs in browsers and on mobile devices. Users want to interact with the visual elements to get the answers they’re looking for, not just cross-filter the results you already gave them. Frameworks like Rails made it easier to build web applications; why not do the same with BI apps? There’s no good reason not to take a similar approach to these apps: APIs, templates, reusability, and so on. It’s time to look at BI through the lens of modern web application development.
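
    To show the shape of a “BI app” rather than a report, the sketch below exposes one metric as a small JSON endpoint that a browser-based visualization could fetch asynchronously. Flask is used only as a familiar example framework; the route, metric and numbers are invented.

    ```python
    # Minimal sketch of BI-as-an-app: a JSON endpoint a visual front end polls asynchronously.
    # Flask is an illustrative choice; the endpoint and figures are invented.
    from flask import Flask, jsonify

    app = Flask(__name__)

    def revenue_by_region():
        # Stand-in for a query pushed down to the data platform (see the First Commandment).
        return [
            {"region": "EMEA", "revenue": 1250000},
            {"region": "APAC", "revenue": 980000},
            {"region": "AMER", "revenue": 2100000},
        ]

    @app.route("/api/metrics/revenue-by-region")
    def revenue_endpoint():
        # The browser fetches this and re-renders the chart, instead of waiting
        # for a static report to be regenerated and emailed around.
        return jsonify(revenue_by_region())

    if __name__ == "__main__":
        app.run(port=5000)
    ```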

    Eighth Commandment: Thou Shalt Use Intelligent Tools
    BI tools have proven themselves when it comes to recommending visualizations based on data. Now it’s time to do the same for automatic maintenance of models and caching, so your end user doesn’t have to worry about it. At big data scale, it’s almost impossible to live without it: there’s a wealth of information that can be gleaned from how users interact with the data and visuals, which modern tools should use to leverage data network effects. Also, look for tools that have search built in for everything, because I’ve seen customers who literally have thousands of visualizations they’ve built out. You need a way to quickly look for results, and with the web we’ve been trained to search instead of digging through menus.

    Ninth Commandment: Thou Shalt Go Beyond The Basics
    Today’s big data systems are known for predictive analytical horsepower. Correlation, forecasting, and more, all make advanced analytics more accessible than ever to business users. Delivering visualizations that can crank through big data without requiring programming experience empowers analysts and gets beyond a simple fixation on ‘up and to the right.’ To realize its true potential, big data shouldn’t have to rely on everyone becoming an R programmer. Humans are quite good at dealing with visual information; we just have to work harder to deliver it to them that way.

    Tenth Commandment: Thou Shalt Not Just Stand There On the Shore of the Data Lake Waiting for a Data Scientist To Do the Work
    Whether you approach Big Data as a data lake or an enterprise data hub, Hadoop has changed the speed and cost of data and we’re all helping to create more of it every day. But when it comes to actually using big data for business users, it is too often a write-only system: data created by the many is only used by the few.

    Business users have a ton of questions that can be answered with data in Hadoop. Business Intelligence is about building applications that deliver that data visually, in the context of day-to-day decision making. The bottom line is that everyone in an organization wants to make data-driven decisions. It would be a terrible shame to limit all the questions that big data can answer to those that need a data scientist to tackle them.

     Source: Datanami

  • The big data race reaches the City


    Vast amounts of information are being sifted for the good of commercial interests as never before

    IBM’s Watson supercomputer, once known for winning the television quiz show Jeopardy! in 2011, is now sold to wealth management companies as an affordable way to dispense investment advice. Twitter has introduced “cashtags” to its stream of social chatter so that investors can track what is said about stocks. Hedge funds are sending up satellites to monitor crop yields before even the farmers know how they’re doing.

    The world is awash with information as never before. According to IBM, 90pc of all existing data was created in the past two years. Once the preserve of academics and the geekiest hedge fund managers, the ability to harness huge amounts of noise and turn it into trading signals is now reaching the core of the financial industry.

    Last year was one of the toughest since the financial crisis for asset managers, according to BCG partner Ben Sheridan, yet they have continued to spend on data management in the hope of finding an edge in subdued markets.

     
    “It’s to bring new data assets to bear on some of the questions that asset managers have always asked, like macroeconomic movements,” he said.

    “Historically, these quantitative data aspects have been the domain of a small sector of hedge funds. Now it’s going to a much more mainstream side of asset managers.”

     
    Banks are among the biggest investors in big data

    Even Goldman Sachs has entered the race for data, leading a $15m investment round in Kensho, which stockpiles data around major world events and lets clients apply the lessons it learns to new situations. Say there’s a hurricane striking the Gulf of Mexico: Kensho might have ideas on what this means for US jobs data six months afterwards, and how that affects the S&P stock index.

    Many businesses are using computing firepower to supercharge old techniques. Hedge funds such as Winton Capital already collate obscure data sets such as wheat prices going back nearly 1,000 years, in the hope of finding patterns that will inform the future value of commodities.

    Others are paying companies such as Planet Labs to monitor crops via satellite almost in real time, offering a hint of the yields to come. Spotting traffic jams outside Wal-Marts can help traders looking to bet on the success of Black Friday sales each year – and it’s easier to do this from space than sending analysts to car parks.

    Some funds, including Eagle Alpha, have been feeding transcripts of calls with company executives into a natural language processor – an area of artificial intelligence that the Turing test foresaw – to figure out if they have gained or lost confidence in their business. Traders might have had gut feelings about this before, but now they can get graphs.
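
    The funds’ actual models are not public; the toy sketch below only shows the general shape of turning call language into a number a trader can chart, using an invented word list and transcript rather than a trained model.

    ```python
    # Toy sketch of scoring executive language for confidence.
    # The word lists and transcript are invented; real systems use trained NLP models.
    import re

    CONFIDENT = {"strong", "growth", "confident", "record", "exceed", "momentum"}
    HEDGING = {"challenging", "uncertain", "headwinds", "difficult", "cautious", "risk"}

    def confidence_score(transcript: str) -> float:
        words = re.findall(r"[a-z']+", transcript.lower())
        pos = sum(w in CONFIDENT for w in words)
        neg = sum(w in HEDGING for w in words)
        total = pos + neg
        return 0.0 if total == 0 else (pos - neg) / total  # -1 (wary) .. +1 (confident)

    snippet = ("We delivered record revenue and remain confident in our momentum, "
               "although currency headwinds make the next quarter more uncertain.")
    print(round(confidence_score(snippet), 2))
    ```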

    [Chart: biggest spenders]

    There is inevitably a lot of noise among these potential trading signals, which experts are trying to weed out.

    “Most of the breakthroughs in machine-learning aren’t in finance. The signal-to-noise ratio is a problem compared to something like recognising dogs in a photograph,” said Dr Anthony Ledford, chief scientist for the computer-driven hedge fund Man AHL.

    “There is no golden indicator of what’s going to happen tomorrow. What we’re doing is trying to harness a very small edge and doing it over a long period in a large number of markets.”

    The statistics expert said the plunging cost of computer power and data storage, crossed with a “quite extraordinary” proliferation of recorded data, have helped breathe life into concepts like artificial intelligence for big investors.

    “The trading phase at the moment is making better use of the signals we already know about. But the next research stage is, can we use machine learning to identify new features?”

    AHL’s systematic funds comb through 2bn price updates on their busiest days, up from 800m during last year’s peak.

    Developments in disciplines such as engineering and computer science have contributed to the field, according to the former academic based in Oxford, where Man Group this week jointly sponsored a new research professorship in machine learning at the university.

    The artificial intelligence used in driverless cars could have applications in finance

    Dr Ledford said the technology has applications in driverless cars, which must learn how to drive in novel conditions, and identifying stars from telescope images. Indeed, he has adapted the methods used in the Zooniverse project, which asked thousands of volunteers to help teach a computer to spot supernovae, to build a new way of spotting useful trends in the City’s daily avalanche of analyst research.

    “The core use is being able to extract patterns from data without specifically telling the algorithms what patterns we are looking for. Previously, you would define the shape of the model and apply it to the data,” he said.
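
    Man AHL’s research is proprietary, but the general idea of extracting patterns without specifying them up front is unsupervised learning. The sketch below clusters synthetic daily features with scikit-learn purely as an illustration; nothing in it reflects the firm’s actual data or models.

    ```python
    # Illustrative only: unsupervised clustering of synthetic market features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Synthetic features per instrument-day: [volatility, volume change, return]
    features = np.vstack([
        rng.normal([0.01, 0.0, 0.0005], 0.003, size=(200, 3)),  # a "quiet" regime
        rng.normal([0.04, 0.5, -0.002], 0.01, size=(200, 3)),   # a "stressed" regime
    ])

    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # No one told the model which rows were "quiet" or "stressed"; it groups them itself.
    print(np.bincount(labels))
    ```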

    These technologies are not just being put to work in the financial markets. Several law firms are using natural language processing to carry out some of the drudgery, including poring over repetitive contracts.

    Slaughter & May has recently adopted Luminance, a due diligence programme that is backed by Mike Lynch, former boss of the computing group Autonomy.

    Freshfields has spent a year teaching a customised system known as Kira to understand the nuances of contract terms that often occur in its business.

    Its lawyers have fed the computer documents they are reading, highlighting the parts they think are crucial. Kira can now parse a contract and find the relevant paragraphs between 40pc and 70pc faster than a human lawyer reviewing it by hand.
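
    Freshfields has not published how Kira works internally; as a generic stand-in, the sketch below trains a TF-IDF text classifier on paragraphs labelled relevant or not, which is one conventional way to rank contract clauses. The training snippets and labels are invented for the example.

    ```python
    # Generic sketch of "learn from highlighted paragraphs, then score new ones".
    # Not Kira's implementation; the snippets and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    paragraphs = [
        "Either party may terminate this agreement on ninety days written notice.",
        "The supplier shall indemnify the customer against third-party IP claims.",
        "Headings are for convenience only and do not affect interpretation.",
        "This agreement is governed by the laws of England and Wales.",
        "Invoices are payable within thirty days of receipt.",
        "The schedules form part of this agreement.",
    ]
    relevant = [1, 1, 0, 1, 1, 0]  # 1 = a lawyer highlighted it as important

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(paragraphs, relevant)

    new_clause = ["The customer may terminate for convenience with thirty days notice."]
    print(model.predict_proba(new_clause)[0, 1])  # estimated probability it is relevant
    ```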

    “It kicks out strange things sometimes, irrelevancies that lawyers then need to clean up. We’re used to seeing perfect results, so we’ve had to teach people that you can’t just set the machine running and leave it alone,” said Isabel Parker, head of innovations at the firm.

    “I don’t think it will ever be a standalone product. It’s a tool to be used to enhance our productivity, rather than replace individuals.”

    The system is built to learn any Latin script, and Freshfields’ lawyers are now teaching it to work on other languages. “I think our lawyers are becoming more and more used to it as they understand its possibilities,” she added.

    Insurers are also spending heavily on big data fed by new products such as telematics, which track a customer’s driving style in minute detail, to help give a fair price to each customer. “The main driver of this is the customer experience,” said Darren Price, group chief information officer at RSA.

    The insurer is keeping its technology work largely in-house, unlike rival Aviva, which has made much of its partnerships with start-up companies in its “digital garage”. Allianz recently acquired the robo-adviser Moneyfarm, and Axa’s venture fund has invested in a chat-robot named Gasolead.

    EY, the professional services firm, is also investing in analytics tools that can raise red flags for its clients in particular countries or businesses, enabling managers to react before an accounting problem spreads.

    Even the Financial Conduct Authority is getting in on the act. Having given its blessing to the insurance sector’s use of big data, it is also experimenting with a "sandbox", or a digital safe space where its tech experts and outside start-ups can use real-life data to play with new ideas.

    The advances that catch on throughout the financial world could create a more efficient industry – and with that tends to come job cuts. The Bank of England warned a year ago that as many as 15m UK jobs were at risk from smart machines, with sales staff and accountants especially vulnerable.

    “Financial services are playing catch-up compared to some of the retail-focused businesses. They are having to do so rapidly, partly due to client demand but also because there are new challengers and disruptors in the industry,” said Amanda Foster, head of financial services at the recruiter Russell Reynolds Associates.

    But City firms, for all their cost pressures, are not ready to replace their fund managers with robots, she said. “There’s still the art of making an investment decision, but it’s about using analytics and data to inform those decisions.”

    Source: Telegraph.co.uk, October 8, 2016

     

     
