8 items tagged "IBM"

  • Data science plays key role in COVID-19 research through supercomputers

    Supercomputers, AI and high-end analytic tools are each playing a key role in the race to find answers, treatments and a cure for COVID-19 as it spreads worldwide.

    In the race to flatten the curve of COVID-19, high-profile tech companies are banking on supercomputers. IBM has teamed up with other firms, universities and federal agencies to launch the COVID-19 High Performance Computing Consortium.

    This consortium has brought together massive computing power in order to assist researchers working on COVID-19 treatments and potential cures. In total, the 16 systems in the consortium will offer researchers over 330 petaflops, 775,000 CPU cores and 34,000 GPUs and counting.

    COVID-19 High Performance Computing Consortium

    The consortium aims to give supercomputer access to scientists, medical researchers and government agencies working on the coronavirus crisis. IBM said its powerful Summit supercomputer has already helped researchers at the Oak Ridge National Laboratory and the University of Tennessee screen 8,000 compounds to find those most likely to bind to the main "spike" protein of the coronavirus, rendering it unable to infect host cells.

    "They were able to recommend the 77 promising small-molecule drug compounds that could now be experimentally tested," Dario Gil, director of IBM Research, said in a post. "This is the power of accelerating discovery through computation."

    In conjunction with IBM, the White House Office of Science and Technology Policy, the U.S. Department of Energy, the National Science Foundation, NASA, nearly a dozen universities, and several other tech companies and laboratories are all involved.

    The work of the consortium offers an unprecedented back end of supercomputer performance that researchers can leverage while using AI to parse through massive databases to get at the precise information they're after, Tim Bajarin, analyst and president of Creative Strategies, said.

    Supercomputing powered by sharing big databases

    Bajarin said that research is fundamentally done in pockets, which creates a lot of insulated, personalized and proprietary big databases.

    "It will take incredible cooperation for Big Pharma to share their research data with other companies in an effort to create a cure or a vaccine," Bajarin added.

    Gil said IBM is working with consortium partners to evaluate proposals from researchers around the world and will provide access to supercomputing capacity for the projects that can have the most immediate impact.

    Many enterprises are coming together to share big data and individual databases with researchers.  

    Signals Analytics released a COVID-19 Playbook that offers access to critical market intelligence and trends surrounding potential treatments for COVID-19. The COVID-19 Playbook is available at no cost to researchers looking to monitor vaccines that are in development for the disease and other strains of coronavirus, monitor drugs that are being tested for COVID-19 and as a tool to assess which drugs are being repurposed to help people infected with the virus.

    "We've added a very specific COVID-19 offering so researchers don't have to build their own taxonomy or data sources and can use it off the shelf," said Frances Zelazny, chief marketing officer at Signals Analytics.

    Eschewing raw computing power for targeted, critical insights

    With the rapid spread of the virus and the death count rising, treatment options can't come soon enough. Raw compute power is important, but perhaps equally as crucial is being able to know what to ask and quickly analyze results.

    "AI can be a valuable tool for analyzing the behavior and spread of the coronavirus, as well as current research projects and papers that might provide insights into how best to battle COVID-19," Charles King, president of the Pund-IT analysis firm, said.

    The COVID-19 consortium includes research requiring complex calculations in epidemiology and bioinformatics. While the high computing power allows for rapid model testing and large data processing, the predictive analytics have to be proactively applied to health IT.

    Dealing with COVID-19 is about predicting the imminent future - from the number of ICU beds needed to social distancing timelines. Looking further ahead, Bajarin would like to see analytic and predictive AI applied as soon as possible to head off future pandemics.

    "We've known about this for quite a while - COVID-19 is a mutation of SARS. Proper trend analysis of medical results going forward could help head off the next great pandemic," Bajarin said.

    Author: David Needle

    Source: TechTarget

  • IBM expands in automation with the acquisition of MyInvenio

    IBM could use MyInvenio's process mining platform to direct its customers to its automation and AI portfolio, a practice that other technology vendors are using.

    IBM will acquire process mining vendor MyInvenio in a bid to further build out the technology giant's automation portfolio.

    The acquisition, made public on April 15, builds on an existing partnership between IBM and MyInvenio. Since December of last year, IBM has integrated MyInvenio's process mining and task mining technology into IBM Cloud Pak for Automation, a platform for building and running automation applications.

    Expanding the automation portfolio

    IBM's acquisition of MyInvenio, which is based in Reggio Emilia, Italy, gives IBM the chance to integrate the company's technology more deeply into its Cloud Pak for Automation platform, said Neil Ward-Dutton, vice president of IDC's intelligent business execution practice.

    "IBM wants to make the MyInvenio technology a seamless part of its automation platform, and wants to add process mining, analytics and optimization from MyInvenio as core parts of its business and IT automation proposition," he said.

    MyInvenio's platform automatically analyzes users' business data to identify tasks that could benefit from automation and surface bottlenecks and other inefficiencies that users could improve. Its technology can analyze user interaction data to determine where robotic process automation (RPA) bots and other automation could benefit its users.

    MyInvenio's technology could complement IBM's existing automation platform in a few ways, including by helping clients identify opportunities to automate with RPA or automate the generation of business rules, Ward-Dutton said.

    "Longer term, MyInvenio's technology could also be used to provide a kind of ongoing operational insight capabilities for business, IT and network management processes, highlighting potential problems and proactively suggesting fixes," he added.

    For IBM, an obvious choice is to integrate output from MyInvenio with IBM Blueworks Live, a cloud-based business process modeler, so that analysis could move directly into a modeling and documentation tool for process reengineering, said Forrester analyst Rob Koplowitz.

    "This can be done today by importing a BPMN (business process model and notation) model, but it will likely become more tightly integrated and easier to use," he added.

    RPA and process mining

    IBM's acquisition of MyInvenio builds on IBM's acquisition of Brazilian RPA vendor WDG Automation in July. At the time, IBM said it planned to integrate more than 600 prebuilt RPA functions from WDG Automation into its Cloud Pak products, beginning with Cloud Pak for Automation.  

    IBM's move with MyInvenio "totally builds on IBM's automation strategy, including, but not exclusive to, RPA," Koplowitz said. "Once automation is determined to be the right approach to optimize a process, easy provisioning of technology streamlines the process and directs a customer to IBM's tech stack."

    Koplowitz added that he's concerned, though, that process mining is "increasingly becoming a feeder for a presumed automation solution."

    So, with process mining, an enterprise using the technology could surface where automation could be useful, which in turn would provide the vendor -- in this case, IBM -- with opportunities to sell more services to automate that process.

    "The automation market is much larger than the process mining market, so anything that can help a vendor feed that beast becomes compelling [to them]," he said.

    The industry saw that, to a certain extent, with UiPath's acquisition of ProcessGold, a process mining vendor based in the Netherlands, and with SAP's acquisition of Signavio earlier this year, Koplowitz noted. Now, with MyInvenio, IBM is making a similar move.

    Still, MyInvenio might be one of a few automation acquisitions IBM makes in the short term.

    Ward-Dutton noted that IBM aims to build out the broadest possible automation platform. He said it wouldn't surprise him if the tech giant made one or two more acquisitions to add to this platform in the coming year or so.

    IBM did not disclose the financial details of its MyInvenio deal.

    Author: Mark Labbe

    Source: TechTarget

  • IBM earns more on lower revenue


    Currency headwinds, the divestment of business units and lower margins in the cloud growth segment have pushed IBM's revenue down on almost every front. But IBM did keep more of what it earned from its operations.

    IBM reported fourth-quarter revenue of $24.1 billion, almost 12 percent less than in the fourth quarter of 2013. Full-year revenue fell as well: at $92.8 billion, revenue for 2014 came in 5.7 percent lower than in 2013.

    The revenue decline looks more dramatic than it actually is. In 2013, several IBM divisions still booked income from the System x business, which was sold to Lenovo last year. Differences also arise from the outsourcing of customer service and from the rise of the dollar. Adjusted for those factors, the fourth-quarter decline was limited to 2 percent. IBM does not specify what the impact was on full-year revenue.

    Cloud takes its toll
    That IBM is also contending with structural changes in the market is clear, however, from the fortunes of its software division. For years, a revenue decline in that division was unthinkable, yet in the fourth quarter its revenue fell by almost 7 percent. Adjusted for currency fluctuations, that still leaves a drop of 3 percent. Over the full year, software revenue decreased by 2 percent. This negative trend reflects the rise of the cloud. IBM took in $7 billion from cloud services in 2014, 60 percent more than in 2013, but by its own account the scale of its cloud business is not yet sufficient to compensate for the lower margins.

    Higher gross margin, lower net profit
    Nevertheless, IBM managed to nudge its operating result as a percentage of revenue slightly higher. The gross profit margin for 2014 was 50 percent, against 49.5 percent for 2013. In the fourth quarter the gap with 2013 was slightly larger: 53.3 versus 52.4 percent.

    That increased efficiency did not translate into higher net profit. After taxes, depreciation and one-off charges, IBM posted a fourth-quarter net profit of $5.5 billion, 11 percent less than in the fourth quarter of 2013. Over the full year, net profit fell by as much as 27 percent, to $12 billion. Besides currency fluctuations, several exceptional cost items play a role here, such as the cost of shrinking the microelectronics division and a $580 million provision for workforce reductions.

     

    Source: Automatiseringsgids, 21 January 2015

  • IBM establishes 'Watson Internet of Things' headquarters in Munich

    IBM is investing more than $3 billion in an IoT campus and innovation center.

    The campus is expected to house a thousand IBM developers, consultants, scientists and designers. The opening is accompanied by an investment of more than $3 billion: IBM's largest investment in Europe in 20 years.

    'The Internet of Things will soon be the largest source of data in the world, while at the moment we do nothing at all with 90 percent of that data,' says Harriet Green, general manager of IBM's new Watson IoT division. 'By connecting this data to IBM's Watson technology, with its unique capabilities in perception, reasoning and self-learning, we open doors for businesses, governments and individuals to draw connections from their data that lead to new insights.'

    In addition to the new headquarters, IBM is also launching four new Watson APIs on the Watson IoT cloud:

    • Natural language processing (NLP) API. Enables users to communicate with devices and systems in human language. Watson links the language to other data sources that provide the right context. A maintenance engineer, for example, can ask Watson what is causing a particular vibration in a machine. Watson automatically recognizes the intent behind the question, links it to data sources such as the machine's maintenance records, and recommends a way to eliminate the vibration.

    • Machine Learning Watson API. Automates data processing and continuously monitors data sets. This allows the system to spot trends and make recommendations when a problem occurs.

    • Video and Image Analytics API. Searches unstructured data sets of images and video and can identify trends and patterns in them.

    • Text Analytics API. Makes it possible to plough through large volumes of text and recognize patterns, for example in call center transcripts, tweets and blogs.

     

    Source: Adformatie

  • IBM: Hadoop solutions and the data lakes of the future

    The foundation of the AI Ladder is Information Architecture. The modern data-driven enterprise needs to leverage the right tools to collect, organize, and analyze their data before they can infuse their business with the results.

    Businesses have many types of data and many ways to apply it. We must look for approaches to manage all forms of data, regardless of techniques (e.g., relational, map reduce) or use case (e.g., analytics, business intelligence, business process automation). Data must be stored securely and reliably, while minimizing costs and maximizing utility.

    Object storage is the ideal place to collect, store, and manage data assets which will be used to generate business value.

    Object storage started as an archive

    Object storage was first conceived as a simplification: how can we remove the extraneous functions seen in file systems to make storage more scalable, reliable, and low-cost? Technologies like erasure coding massively reduced costs by allowing reliable storage to be built on cheap commodity hardware. The interface was simple: a uniform, limitless namespace, atomic data writes, all available over HTTP.
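
    To make the simplicity of that interface concrete, here is a minimal sketch (not taken from the article) of writing and reading an object over an S3-compatible HTTP API with boto3; the endpoint, credentials, bucket and key are placeholders.

    ```python
    # Minimal object-interface sketch: a flat key namespace, atomic writes,
    # everything over HTTP. Works against any S3-compatible endpoint, such as
    # the one IBM Cloud Object Storage exposes; all values below are placeholders.
    import boto3

    cos = boto3.client(
        "s3",
        endpoint_url="https://object-storage.example.com",  # placeholder endpoint
        aws_access_key_id="<ACCESS_KEY>",
        aws_secret_access_key="<SECRET_KEY>",
    )

    # Atomic write: the object becomes visible in full or not at all.
    cos.put_object(Bucket="my-bucket", Key="logs/2020/04/app.log", Body=b"first line\n")

    # Read it back by key; there is no directory tree, only the flat key space.
    data = cos.get_object(Bucket="my-bucket", Key="logs/2020/04/app.log")["Body"].read()
    print(data.decode())
    ```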

    Object storage excels at data storage. For example, IBM Cloud Object Storage is designed for over ten 9's of durability, has robust data protection and security features and flexible pricing, and is natively integrated with IBM Aspera on Cloud high-speed data transfer.

    The first use cases were obvious: object storage served as a more scalable file system for storing unstructured data like music, images, and video, or for storing backup files, database dumps, and log files. Its single namespace and data tiering options allow it to be used for data archiving, and its HTTP interface makes it convenient for serving static website content as part of cloud-native applications.

    But, beyond that, it was just a data dump.

    Map reduce and the rise of Hadoop solutions

    At the same time object storage was replacing file system use cases, the map reduce programming model was emerging in data analytics. Apache Hadoop provided a software framework for processing big data workloads that traditional relational database management system (RDBMS) solutions could not effectively manage. Data scientists and analysts had to give up declarative data management and SQL queries, but gained the ability to work with exponentially larger data sets and the freedom to explore unstructured and semi-structured data.

    In the beginning, Hadoop was seen by some as a step backwards. It achieved a measure of scale and cost savings but gave up much of what made RDBMS systems so powerful and easy to work with. While not requiring schemas added flexibility, query latencies rose and overall performance suffered. But the Hadoop ecosystem has continued to expand and meet user needs: Spark massively increased performance, and Hive provided SQL querying.

    Hadoop is not suitable for everything. Transactional processing is still a better fit for RDBMS. Businesses must use the appropriate technology for their various OLAP and OLTP needs.

    HDFS became the de facto data lake for many enterprises

    Like object storage, Hadoop was also designed as a scale-out architecture on cheap commodity hardware. The Hadoop Distributed File System (HDFS) began with the premise that compute should be moved to the data. It was designed to place data on locally attached storage on the compute nodes themselves. Data was stored in a form that could be directly read by locally running tasks, without a network hop.

    While this is beneficial for many types of workloads, it wasn't the end of ETL envisioned by some. By placing readable copies of data directly on compute nodes, HDFS can't take advantage of erasure coding to save costs. When data reliability is required, replication is just not as cost-effective: HDFS's default three-way replication consumes three bytes of raw capacity for every byte of data, versus roughly 1.2 to 1.5 bytes under typical erasure-coding schemes. Furthermore, storage can't be scaled independently of compute. As workloads diversified, this inflexibility caused management and cost issues. For many jobs, network wasn't actually the bottleneck. Compute bottlenecks are typically CPU or memory issues, and storage bottlenecks are typically hard drive spindle related, either disk throughput or seek constrained.

    When you separate compute from storage, both sides benefit. Compute nodes are cheaper because they don’t need large amounts of storage, can be scaled up or down quickly without massive data migration costs, and jobs can even be run in isolation when necessary.

    For storage, you want to spread out your load onto as many spindles as possible, so using a smaller active data set on a large data pool is beneficial, and your dedicated storage budget can be reserved for smaller flash clusters when latency is really an issue.

    The scale and cost savings of Hadoop attracted many businesses to use it wherever possible, and many businesses ended up using HDFS as their primary data store. This has led to cost, manageability, and flexibility issues.

    The data lake of the future supports all compute workloads

    Object storage was always useful as a way of backing up your databases, and it could be used to offload data from HDFS to a lower-cost tier as well. In fact, this is the first thing enterprises typically do.

    But, a data lake is more than just a data dump.

    A data lake is a place to collect an organization’s data for future use. Yes, it needs to store and protect data in a highly scalable, secure, and cost-effective manner, and object storage has always provided this. But when data is stored in the data lake, it is often not known how it will be used and how it will be turned into value. Thus, it is essential that the data lake include good integration with a range of data processing, analytics, and AI tools.

    Typical tools will not only include big data tools such as Hadoop, Spark, and Hive, but also deep learning frameworks (such as TensorFlow) and analytics tools (such as Pandas). In addition, it is essential for a data lake to support tools for cataloging, messaging, and transforming the data to support exploration and repurposing of data assets.

    Object storage can store data in formats native to big data and analytics tools. Your Hadoop and Spark jobs can directly access object storage through the S3a or Stocator connectors using the IBM Analytics Engine. IBM uses these techniques against IBM Cloud Object Storage for operational and analytic needs.
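
    As a rough sketch of that pattern (not IBM's reference configuration), the PySpark snippet below reads Parquet objects straight from an S3-compatible bucket through the S3a connector and runs a Spark SQL query over them; the endpoint, credentials, bucket and column names are placeholders, and the S3a (hadoop-aws) libraries are assumed to already be on Spark's classpath.

    ```python
    # Sketch: query data that lives only in object storage, with no copy into HDFS.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("object-storage-data-lake")
        .config("spark.hadoop.fs.s3a.endpoint", "https://object-storage.example.com")  # placeholder
        .config("spark.hadoop.fs.s3a.access.key", "<ACCESS_KEY>")                      # placeholder
        .config("spark.hadoop.fs.s3a.secret.key", "<SECRET_KEY>")                      # placeholder
        .getOrCreate()
    )

    # Read Parquet objects directly from the bucket (hypothetical bucket and layout).
    events = spark.read.parquet("s3a://my-data-lake/events/")
    events.createOrReplaceTempView("events")

    # Ad-hoc SQL over object-storage data (hypothetical column name).
    spark.sql("""
        SELECT event_type, COUNT(*) AS n
        FROM events
        GROUP BY event_type
        ORDER BY n DESC
    """).show()
    ```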

    Just like with Hadoop, you can also leverage object storage to directly perform SQL queries. The IBM Cloud SQL Query service uses Spark internally to perform ad-hoc and OLAP queries against data stored directly in IBM Cloud Object Storage buckets.

    TensorFlow can also be used to train and deploy ML models directly using data in object storage.
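
    A minimal sketch of that idea, assuming TensorFlow can reach the bucket through S3-compatible filesystem support (for example via the tensorflow-io plugin) and that credentials point at the object storage endpoint; the bucket, file layout and feature schema below are invented for illustration.

    ```python
    # Sketch: train a small Keras model on TFRecord files kept in object storage.
    import tensorflow as tf

    # Hypothetical bucket and prefix; tf.io.gfile resolves s3:// paths when
    # S3 filesystem support is available.
    files = tf.io.gfile.glob("s3://my-data-lake/training/*.tfrecord")

    def parse(record):
        # Assumed schema: 10 float features and an integer label per example.
        spec = {
            "features": tf.io.FixedLenFeature([10], tf.float32),
            "label": tf.io.FixedLenFeature([], tf.int64),
        }
        example = tf.io.parse_single_example(record, spec)
        return example["features"], example["label"]

    dataset = (
        tf.data.TFRecordDataset(files)
        .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(10_000)
        .batch(256)
        .prefetch(tf.data.AUTOTUNE)
    )

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(dataset, epochs=3)
    ```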

    This is the true future of object storage

    As organizations look to modernize their information architecture, they save money when they build their data lake with object storage.

    For existing HDFS shops, this can be done incrementally, but you need to make sure to keep your data carefully organized as you do so to take full advantage of the rich capabilities available now and in the future.

    Author: Wesly Leggette & Michael Factor

    Source: IBM

  • Research details developments in the business intelligence (BI) market that is estimated to grow at 10% CAGR to 2020

    According to an analyst cited in the global business intelligence market report, social media has played a critical role in SMEs and mid-sized organizations in the past few years. Many SMEs are increasingly embracing this trend and integrating their BI software with social media platforms.

    Market outlook of the business intelligence market

    The market research analyst predicts that the global business intelligence market will grow at a CAGR of around 10% during the forecast period. The growing adoption of data analytics by organizations worldwide is a key driver for the growth of this market.

    The majority of corporate data sources include data generated from enterprise applications along with newly generated cloud-based and social network data. Business intelligence tools are useful in the retrieval and analysis of this vast and growing volume of discrete data.

    They also help optimize business decisions, discover significant weak signals, and develop indicator patterns to identify opportunities and threats for businesses.

    The increased acceptance of cloud BI solutions by SMEs is also boosting the growth of this market. The adoption of cloud services allows end-users to concentrate on core activities rather than managing their IT environment.

    Cloud BI solutions enable applications to be scaled quickly, integrate easily with third-party applications, and provide security at all levels of the enterprise IT architecture so that these applications can be accessed remotely.

    Market segmentation by technology of the business intelligence market:

    • Traditional BI
    • Mobile BI
    • Cloud BI
    • Social BI

    The mobile BI segment accounts for approximately 20% of the global BI market. It enables the mobile workforce to get business insights by data analysis, using applications optimized for mobile and smart devices.

    The growing smartphone adoption is likely to emerge as a key growth driver for this segment during the forecast period.

    Market segmentation by deployment of the business intelligence market

    • Cloud BI
    • On-premises BI

    The on-premises segment accounted for 86% of the market share in 2015. However, the report anticipates that this segment's share will decline by the end of the forecast period.

    In this segment, the software is purchased and installed on the server of an enterprise. It requires more maintenance but is highly secure and easy to manage.

    Geographical segmentation of the BI market

    • Americas
    • APAC
    • EMEA

    The Americas dominated the market during 2015, with a market share of around 56%. The high adoption of cloud BI solutions in this region is the major growth contributor for this market.

    The US is the market leader in this region, as most of the key vendors are based there.

    Competitive landscape and key vendors

    Microsoft is one of the largest BI vendors and offers Power BI, which helps to deliver business-user-oriented, self-service data preparation and analysis needs through Excel 2013 and Office 365. The competitive environment in this market is expected to intensify during the forecast period due to an increase in R&D innovations and mergers.

    The market is also expected to witness a growing trend of acquisitions by the leading players. The key players in the market are expected to diversify their geographical presence during the forecast period.

    The key vendors of the market are -

    • IBM
    • Microsoft
    • Oracle
    • SAP
    • SAS Institute

    Other prominent vendors in the market include Actuate, Alteryx, Birst, Board International, Datawatch, GoodData, Infor, Information Builders, Logi Analytics, MicroStrategy, Panorama Software, Pentaho, Prognoz, Pyramid Analytics, Qlik, Salient Management Company, Tableau, Targit, Tibco Software, and Yellowfin.

    Key questions answered in the report

    • What will the market size and the growth rate be in 2020?
    • What are the key factors driving the BI market?
    • What are the key market trends impacting the growth of the BI market?
    • What are the challenges to market growth?
    • Who are the key vendors in the global BI market?
    • What are the market opportunities and threats faced by the vendors in the BI market?
    • What are the trending factors influencing the market shares of the Americas, APAC, and EMEA?
    • What are the key outcomes of the five forces analysis of the BI market?

    Source: WhaTech

  • The ability to speed up the training for deep learning networks used for AI through chunking

    At the International Conference on Learning Representations on May 6, IBM Research shared a look at how chunk-based accumulation can speed up the training of deep learning networks used for artificial intelligence (AI).

    The company first shared the concept and its vast potential at last year’s NeurIPS conference, when it demonstrated the ability to train deep learning models with 8-bit precision while fully preserving model accuracy across all major AI data set categories: image, speech and text. The result? This technique could accelerate training time for deep neural networks by two to four times over today’s 16-bit systems.

    In IBM Research's new paper, titled 'Accumulation Bit-Width Scaling For Ultralow Precision Training of Deep Networks', researchers explain in greater depth exactly how chunk-based accumulation makes it possible to lower the precision of accumulation from 32 bits down to 16 bits. 'Chunking' divides a long sequence of products into smaller groups, accumulates each group separately, and then adds the partial results of those groups together, leading to a significantly more accurate result than normal accumulation. This allows researchers to study new networks and improve the overall efficiency of deep learning hardware.
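
    A toy NumPy illustration of the principle (not IBM's implementation, and using float16 rather than the paper's 8-bit formats): accumulating many small products in one long low-precision running sum loses information once the sum dwarfs each addend, while summing in chunks and then adding the partial sums stays close to the exact answer.

    ```python
    # Toy demonstration of chunk-based accumulation in low precision.
    import numpy as np

    rng = np.random.default_rng(0)
    products = rng.uniform(0.01, 0.1, size=16384).astype(np.float16)

    # Naive low-precision accumulation: one long running sum in float16.
    # Once the sum is large, each small addend falls below half a unit in the
    # last place and is rounded away ("swamping").
    naive = np.float16(0.0)
    for p in products:
        naive = np.float16(naive + p)

    # Chunk-based accumulation: sum each chunk in float16, then add the
    # per-chunk partial sums together.
    chunk = 64
    partials = np.array(
        [products[i:i + chunk].sum(dtype=np.float16) for i in range(0, len(products), chunk)],
        dtype=np.float16,
    )
    chunked = partials.sum(dtype=np.float16)

    exact = products.sum(dtype=np.float64)
    print(f"exact   sum: {exact:8.1f}")
    print(f"naive   sum: {float(naive):8.1f}   (large error)")
    print(f"chunked sum: {float(chunked):8.1f}   (close to exact)")
    ```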

    Although further reducing the precision used for training was previously considered infeasible, IBM expects this 8-bit training platform to become a widely adopted industry standard in the coming years.

    Author: Daniel Gutierrez

    Source: Insidebigdata

  • The big data race reaches the City

    Vast amounts of information are being sifted for the good of commercial interests as never before

    IBM’s Watson supercomputer, once known for winning the television quiz show Jeopardy! in 2011, is now sold to wealth management companies as an affordable way to dispense investment advice. Twitter has introduced “cashtags” to its stream of social chatter so that investors can track what is said about stocks. Hedge funds are sending up satellites to monitor crop yields before even the farmers know how they’re doing.

    The world is awash with information as never before. According to IBM, 90pc of all existing data was created in the past two years. Once the preserve of academics and the geekiest hedge fund managers, the ability to harness huge amounts of noise and turn it into trading signals is now reaching the core of the financial industry.

    Last year was one of the toughest since the financial crisis for asset managers, according to BCG partner Ben Sheridan, yet they have continued to spend on data management in the hope of finding an edge in subdued markets.

     
    “It’s to bring new data assets to bear on some of the questions that asset managers have always asked, like macroeconomic movements,” he said.

    “Historically, these quantitative data aspects have been the domain of a small sector of hedge funds. Now it’s going to a much more mainstream side of asset managers.”

     
    Banks are among the biggest investors in big data

    Even Goldman Sachs has entered the race for data, leading a $15m investment round in Kensho, which stockpiles data around major world events and lets clients apply the lessons it learns to new situations. Say there’s a hurricane striking the Gulf of Mexico: Kensho might have ideas on what this means for US jobs data six months afterwards, and how that affects the S&P stock index.

    Many businesses are using computing firepower to supercharge old techniques. Hedge funds such as Winton Capital already collate obscure data sets such as wheat prices going back nearly 1,000 years, in the hope of finding patterns that will inform the future value of commodities.

    Others are paying companies such as Planet Labs to monitor crops via satellite almost in real time, offering a hint of the yields to come. Spotting traffic jams outside Wal-Marts can help traders looking to bet on the success of Black Friday sales each year – and it’s easier to do this from space than sending analysts to car parks.

    Some funds, including Eagle Alpha, have been feeding transcripts of calls with company executives into a natural language processor – an area of artificial intelligence that the Turing test foresaw – to figure out if they have gained or lost confidence in their business. Traders might have had gut feelings about this before, but now they can get graphs.
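
    As a toy illustration of that approach (not any fund's actual pipeline), the sketch below scores invented snippets of executive commentary with an off-the-shelf sentiment model from NLTK; the transcript text is made up, and the VADER lexicon must be downloaded once.

    ```python
    # Toy sketch: gauge executive confidence by scoring call-transcript snippets.
    # Requires running nltk.download("vader_lexicon") once beforehand.
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    transcripts = {
        "Q1 call": "We are very confident in our pipeline and expect strong growth next quarter.",
        "Q2 call": "Demand was weaker than expected and we remain cautious about the outlook.",
    }

    sia = SentimentIntensityAnalyzer()
    for label, text in transcripts.items():
        score = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
        print(f"{label}: compound sentiment {score:+.2f}")
    ```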

    [Chart: biggest spenders]

    There is inevitably a lot of noise among these potential trading signals, which experts are trying to weed out.

    “Most of the breakthroughs in machine-learning aren’t in finance. The signal-to-noise ratio is a problem compared to something like recognising dogs in a photograph,” said Dr Anthony Ledford, chief scientist for the computer-driven hedge fund Man AHL.

    “There is no golden indicator of what’s going to happen tomorrow. What we’re doing is trying to harness a very small edge and doing it over a long period in a large number of markets.”

    The statistics expert said the plunging cost of computer power and data storage, crossed with a “quite extraordinary” proliferation of recorded data, have helped breathe life into concepts like artificial intelligence for big investors.

    “The trading phase at the moment is making better use of the signals we already know about. But the next research stage is, can we use machine learning to identify new features?”

    AHL’s systematic funds comb through 2bn price updates on their busiest days, up from 800m during last year’s peak.

    Developments in disciplines such as engineering and computer science have contributed to the field, according to the former academic based in Oxford, where Man Group this week jointly sponsored a new research professorship in machine learning at the university.

    The artificial intelligence used in driverless cars could have applications in finance

    Dr Ledford said the technology has applications in driverless cars, which must learn how to drive in novel conditions, and identifying stars from telescope images. Indeed, he has adapted the methods used in the Zooniverse project, which asked thousands of volunteers to help teach a computer to spot supernovae, to build a new way of spotting useful trends in the City’s daily avalanche of analyst research.

    “The core use is being able to extract patterns from data without specifically telling the algorithms what patterns we are looking for. Previously, you would define the shape of the model and apply it to the data,” he said.

    These technologies are not just being put to work in the financial markets. Several law firms are using natural language processing to carry out some of the drudgery, including poring over repetitive contracts.

    Slaughter & May has recently adopted Luminance, a due diligence programme that is backed by Mike Lynch, former boss of the computing group Autonomy.

    Freshfields has spent a year teaching a customised system known as Kira to understand the nuances of contract terms that often occur in its business.

    Its lawyers have fed the computer documents they are reading, highlighting the parts they think are crucial. Kira can now parse a contract and find the relevant paragraphs between 40pc and 70pc faster than a human lawyer reviewing it by hand.

    “It kicks out strange things sometimes, irrelevancies that lawyers then need to clean up. We’re used to seeing perfect results, so we’ve had to teach people that you can’t just set the machine running and leave it alone,” said Isabel Parker, head of innovations at the firm.

    “I don’t think it will ever be a standalone product. It’s a tool to be used to enhance our productivity, rather than replace individuals.”

    The system is built to learn any Latin script, and Freshfields’ lawyers are now teaching it to work on other languages. “I think our lawyers are becoming more and more used to it as they understand its possibilities,” she added.

    Insurers are also spending heavily on big data fed by new products such as telematics, which track a customer’s driving style in minute detail, to help give a fair price to each customer. “The main driver of this is the customer experience,” said Darren Price, group chief information officer at RSA.

    The insurer is keeping its technology work largely in-house, unlike rival Aviva, which has made much of its partnerships with start-up companies in its “digital garage”. Allianz recently acquired the robo-adviser Moneyfarm, and Axa’s venture fund has invested in a chat-robot named Gasolead.

    EY, the professional services firm, is also investing in analytics tools that can raise red flags for its clients in particular countries or businesses, enabling managers to react before an accounting problem spreads.

    Even the Financial Conduct Authority is getting in on the act. Having given its blessing to the insurance sector’s use of big data, it is also experimenting with a “sandbox”, or a digital safe space where their tech experts and outside start-ups can use real-life data to play with new ideas.

    The advances that catch on throughout the financial world could create a more efficient industry – and with that tends to come job cuts. The Bank of England warned a year ago that as many as 15m UK jobs were at risk from smart machines, with sales staff and accountants especially vulnerable.

    “Financial services are playing catch-up compared to some of the retail-focused businesses. They are having to do so rapidly, partly due to client demand but also because there are new challengers and disruptors in the industry,” said Amanda Foster, head of financial services at the recruiter Russell Reynolds Associates.

    But City firms, for all their cost pressures, are not ready to replace their fund managers with robots, she said. “There’s still the art of making an investment decision, but it’s about using analytics and data to inform those decisions.”

    Source: Telegraph.co.uk, October 8, 2016

     

     
