8 items tagged "data management"

  • 4 Tips to help maximize the value of your data


    Summer’s lease hath all too short a date.

    It always seems to pass by in the blink of an eye, and this year was no exception. Though I am excited for cooler temperatures and the prismatic colors of New England in the fall, I am sorry to see summer come to an end. The end of summer also means that kids are back in school, reunited with their friends and equipped with a bevy of new supplies for the new year. Our kids have the tools and supplies they need for success; why shouldn’t your business?

    This month’s Insights Beat focuses on additions to our team’s ever-growing body of research on new and emerging data and analytics technologies that help companies maximize the value of their data.

    Get real about real time

    Noel Yuhanna and Mike Gualtieri published a Now Tech article on translytical data platforms. Since we first introduced the term a few years ago, translytical data platforms have been a scorching hot topic in database technology. Enabling real-time insights is imperative in the age of the customer, and there are a number of vendors who can help you streamline your data management. Check out their new report for an overview of 18 key firms operating in this space, and look for a soon-to-be-published Forrester Wave™ evaluation in this space, as well.

    Don’t turn a blind eye to computer vision

    Interested in uncovering data insights from visual assets? Look no further than computer vision. While this technology has existed in one form or another for many years, development in convolutional neural networks reinvigorated computer vision R&D (and indeed established computer vision as the pseudo-progenitor of many exciting new AI technologies). Don’t turn a blind eye to computer vision just because you think it doesn’t apply to your business. Computer vision already has a proven track record for a wide variety of use cases. Kjell Carlsson published a New Tech report to help companies parse a diverse landscape of vendors and realize their (computer) vision.

    Humanize B2B with AI

    AI now touches virtually all aspects of business. As its techniques grow more sophisticated, so too do its use cases. Allison Snow explains how B2B insights pros can leverage emerging AI technologies to drive empathy, engagement, and emotion. Check out the full trilogy of reports and overview now.

    Drive data literacy with data leadership

    Of course, disruptive changes to data strategy can be a hard sell, especially when your organization lacks the structural forces to advocate for new ideas. Jennifer Belissent, in a recent blog, makes the case for why data leadership is crucial to driving better data literacy. Stay tuned for her full report on data literacy coming soon. More than just leadership, data and analytics initiatives require investment, commitment, and an acceptance of disruption. No initiative will be perfect from the get-go, and it’s important to remember that analytics initiatives don’t usually come with a magician’s reveal.

    Author: Srividya Sridharan

    Source: Forrester

  • Data management: compliance, protection, and the role of IT


    The business benefit of data and data-driven decisions cannot be overstated, a view that is widely shared in today’s business landscape. At the same time, there are sensitivities around where that data comes from and how it is accessed and used. For this reason, data protection and privacy are the driving topics of the day and, for enterprise companies, essential to remaining a going concern.

    To ensure regulatory compliance and generate business value, any data coming into an organization needs to be confidentially handled, trusted, and protected. Modern businesses also want their products to be cloud deployable, but many have security concerns about sharing information in the cloud. It’s crucial that when you use data, you also protect it, preserve the integrity of original personal ownership, and maintain the privacy of the person to whom it belongs at all costs.

    The first level of data protection is to not collect personal data if there is no legitimate purpose in doing so. If personal data was collected and a legitimate purpose no longer exists, it must be deleted.

    The second level of data protection can be realized through a framework of technology measures: Identity and access management, patch management, separation of business purpose (disaggregation of legal entities), and encryption.

    IT teams often provide data in an encrypted format as a means to get people the information they need, without compromising sensitive information. People receiving the data don’t usually need to know every bit of data, they just want an aggregate of what the data looks like. And IT teams want to ensure that when they transfer important data assets, the information is secure.
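    The idea of sharing aggregates rather than raw records can be sketched in a few lines. This is a hypothetical illustration, not any specific vendor’s tooling; the field names (name, dept, salary) are invented for the example.

    ```python
    from collections import defaultdict

    def aggregate_for_sharing(records, group_key, value_key):
        """Share only per-group aggregates, never the raw rows that identify individuals."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for row in records:
            sums[row[group_key]] += row[value_key]
            counts[row[group_key]] += 1
        # Each output row describes a group, not a person.
        return {k: {"count": counts[k], "average": sums[k] / counts[k]} for k in sums}

    employees = [
        {"name": "A. Jansen", "dept": "sales", "salary": 52000},
        {"name": "B. Smit",   "dept": "sales", "salary": 48000},
        {"name": "C. Visser", "dept": "it",    "salary": 61000},
    ]
    print(aggregate_for_sharing(employees, "dept", "salary"))
    ```

    The recipient learns what the data looks like per department without ever seeing a row that identifies an individual.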

    Additionally, when it comes to being data compliant, there are rules and regulations that businesses must follow, such as the General Data Protection Regulation (GDPR) and data protection and privacy agreements.

    GDPR harmonizes data protection regulation throughout the European Union and gives individuals more control over their data. It imposes expansive rules about processing data backed by powerful enforcement, so IT teams must ensure they are compliant. This creates an extra, guaranteed level of security for corporate and personal data, though it’s not without its complications for enterprises.

    Concretely, this means that companies have to technically ensure that only the necessary data sets move through ‘boundaryless’ end-to-end business scenarios. Here, we consider efficient data control in and through the context of comprehensive business processing for a declared purpose that is legally secured, including by consent of the individual to whom the data relates.

    The business context and its technical rendering through customizing and configuration is central to the business capability of efficiently controlling data for purposes of data protection and privacy. Integrated services provide business context by showing information contained in any one data set that is linked to ordered business objects and business object types related to the data subject.

    Here, we have offered an embedded view of the data subject, which can be uniformly changed and managed in the context of a logical sequence of business events.

    Data management capabilities

    To further protect data and stay compliant, many IT teams have adopted the approach of applying data management capabilities to encrypt and anonymize data without actually changing the data set. IT simply changes the way data is presented to ensure the data is safe.

    One recent example is the adoption of the GDPR rules to comply with the legal regulations. In this case, the data management capabilities must ensure that only allowed data is shown and that protected personal data is hidden or deleted (information lifecycle management) without destroying required information and connections.
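    A minimal sketch of changing how data is presented, without touching the stored data set, might look like the following. The record fields and the mask string are illustrative assumptions, not a reference to any particular product.

    ```python
    def presentation_view(record, allowed_fields, masked="***"):
        """Change how data is presented, not the stored data set itself."""
        return {k: (v if k in allowed_fields else masked) for k, v in record.items()}

    customer = {"id": 1017, "name": "Erika Mustermann",
                "email": "erika@example.com", "segment": "premium"}
    # Only non-personal fields are shown; the underlying record is untouched.
    print(presentation_view(customer, allowed_fields={"id", "segment"}))
    ```

    The same view function can serve different audiences simply by handing each one a different set of allowed fields.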

    By transitioning to what we call an 'intelligent organization', businesses can feed applications and processes with the data essential for the digital economy and intelligently connect people, data, and processes safely and securely.

    Solutions offer customers comprehensive in-depth information about the places where their master data exists, which parts reside in which services, applications or systems, and how the data can be accessed, or they can even get direct access. Moreover, a clear picture of the complete master data set and all individual owners can be obtained, including rules for creating data consistency. This provides overall consistency, and the robustness that is required in a service-driven enterprise environment.

    Tiered levels of access

    Another tactical way of keeping data secure is for IT to work closely with each line of business to set tiered levels of access, creating a workflow scenario that grants first-, second-, or third-tier access (and so on) to individual persons within a specific line of business.

    In contrast to the more traditional model outlined above, IT teams can offer a tiered approach to authorization. Users have limited access based on transaction codes, organizational levels, etc., by assigning authorization roles through different lines of business.
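    A tiered authorization check of this kind can be sketched as follows. The role names, transaction codes, and lines of business are all hypothetical, invented for the example.

    ```python
    # Hypothetical role definitions: each role grants a set of transaction
    # codes within one line of business.
    ROLES = {
        "sales_tier1":   {"line": "sales",   "tcodes": {"VIEW_ORDER"}},
        "sales_tier2":   {"line": "sales",   "tcodes": {"VIEW_ORDER", "EDIT_ORDER"}},
        "finance_tier1": {"line": "finance", "tcodes": {"VIEW_INVOICE"}},
    }

    def is_authorized(user_roles, line_of_business, tcode):
        """A user may act only if some assigned role covers both the line
        of business and the requested transaction code."""
        return any(
            ROLES[r]["line"] == line_of_business and tcode in ROLES[r]["tcodes"]
            for r in user_roles
        )

    print(is_authorized(["sales_tier1"], "sales", "EDIT_ORDER"))  # False
    print(is_authorized(["sales_tier2"], "sales", "EDIT_ORDER"))  # True
    ```

    Raising a user’s tier is then a matter of assigning an additional role rather than editing permissions record by record.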

    Best practices for data compliance and protection

    Both approaches outlined above allow businesses to review their data to determine the real value of it without compromising the security of the data.

    Overall, it’s important that data compliance is not only a tech topic, but one that is discussed, rolled out, and followed company-wide. As 2019 comes to a close, companies must have a data compliance program in place, a data protection culture within their organizations, and employees who understand the importance of the change processes and tools needed to adhere to the new regulations.

    Considering such aspects from the beginning can be a competitive advantage for companies and should happen at an early stage. Not adhering to data protection and privacy rules and regulations can cause tremendous damage to a company’s image and reputation and can have a heavy financial impact.

    Author: Katrin Lehmann

    Source: Information-management

  • Data warehousing: ETL, ELT, and the use of big data


    If your company keeps up with the trends in data management, you likely have encountered the concepts and definitions of data warehouse and big data. When your data professionals try to implement data extraction for your business, they need a data repository. For this purpose, they can use a data warehouse and a data lake.

    Roughly speaking, a data lake is mainly used to gather and preserve unstructured data, while a data warehouse is intended for structured and semi-structured data.

    Data warehouse modeling concepts

    All data in a data warehouse is well organized, archived, and arranged in a particular way. Not all data that can be gathered from multiple sources reaches a data warehouse. The source of data is crucial since it impacts the quality of data-driven insights and, hence, business decisions.

    During the data warehouse development phase, a lot of time and effort is needed to analyze data sources and select useful ones. Whether a data source has value depends on the business processes. Data only gets into the warehouse once its value is confirmed.

    On top of that, the way data is represented in your database has a critical role. Concepts of data modeling in a data warehouse are a powerful expression of business requirements specific to a company. A data model determines how data scientists and software engineers will design, create, and implement a database.

    There are three basic types of modeling. A conceptual data model describes all the entities a business needs information about. It provides facts about real-world things, customers, and other business-related objects and relations. The goal of creating this data model is to synthesize and store all the data needed to gain an understanding of the whole business. This model is designed for a business audience.

    A logical data model suits more in-depth data. It describes the structure of data elements, their attributes, and the ways these elements interrelate. For instance, this model can be used to identify relationships between customers and the products of interest to them. This model is characterized by a high level of clarity and accuracy.

    A physical data model describes the specific data and relationships needed for a particular case, as well as the way the data model is used in database implementation. It provides a wealth of metadata and facilitates visualizing the structure of a database. This metadata can cover access rights, limitations, indexes, and other features.

    ELT and ETL data warehouse concepts

    Large amounts of data sorted for warehousing and analytics require a special approach. Businesses need to gather and process data to retrieve meaningful insights. Thus, data should be manageable, clean, and suitable for molding and transformation.

    ETL (extract, transform, load) and ELT (extract, load, transform) are two approaches that have technological differences but serve the same purpose: to manage and analyze data.

    ETL is the paradigm that enables data extraction from multiple sources and pulling data into a single database to serve a business.

    At the first stage of the ETL process, engineers extract data from different databases and gather it in a single place. The collected data undergoes transformation to take the form required by the target repository. Then the data arrives at a data warehouse or target database.

    Switch the letters 'T' and 'L' and you get the ELT process. After retrieval, the data can be loaded straight into the target database. Cloud technology enables large, scalable storage, so massive datasets can be loaded first and then transformed as business requirements and needs dictate.

    The ELT paradigm is a newer alternative to the well-established ETL process. It is flexible and allows fast processing of raw data. On the one hand, ELT requires special tools and frameworks; on the other, it enables unlimited access to business data, saving BI and data analytics experts significant time.

    ETL testing concepts are also essential to ensure that data loads into a data warehouse correctly and accurately. This testing involves data verification at transitional phases, so that before data reaches its destination, its quality and usefulness are already verified.
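    The two orderings can be sketched side by side. The source systems (crm, erp) and field names below are invented for illustration; the point is only where the transform step sits relative to the load.

    ```python
    def extract(sources):
        # Gather rows from several source systems into one place.
        return [row for src in sources for row in src]

    def transform(rows):
        # Clean and reshape: normalize names, parse amounts as numbers.
        return [{"name": r["name"].strip().title(), "amount": float(r["amount"])}
                for r in rows]

    def load(rows, target):
        target.extend(rows)

    crm = [{"name": " alice ", "amount": "10.5"}]
    erp = [{"name": "BOB",     "amount": "7"}]

    # ETL: transform first, then load only the cleaned rows.
    etl_warehouse = []
    load(transform(extract([crm, erp])), etl_warehouse)

    # ELT: load the raw rows first, transform inside the target platform later.
    elt_lake = []
    load(extract([crm, erp]), elt_lake)
    elt_warehouse = transform(elt_lake)

    # Both orderings yield the same cleaned result.
    assert etl_warehouse == elt_warehouse
    ```

    In ELT the raw rows remain available in the lake, which is what lets analysts re-run transformations later as requirements change.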

    Types of data warehouse for your company

    Different data warehouse concepts presuppose the use of particular techniques and tools to work with data. Basic data warehouse concepts also differ depending on a company’s size and purposes of using data.

    Enterprise data warehouse enables a unique approach to organizing, visualizing, and representing all the data across a company. Data can be classified by a subject and can be accessed based on this attribute.

    Data mart is a subcategory of a data warehouse designed for specific tasks in business areas such as retail, finance, and so forth. Data comes into a data mart straight from the sources.

    Operational data store satisfies the reporting needs within a company. It updates in real time, which makes this solution best suited for keeping current business records.

    Big data and data warehouse ambiguity

    A data warehouse is an architecture that has proved its value for data storage over the years. It holds data that has a defined value and can be used from the start to solve business needs. Everyone can access this data, and its datasets are reliable and accurate.

    Big data is a hyped field these days. It is the technology that allows retrieving data from heterogeneous sources. The key features of big data are volume, velocity or data streams, and a variety of data formats. Unlike a data warehouse, big data is a repository that can hold unstructured data as well.

    Companies seek to adopt custom big data solutions to unlock useful information that can help improve decision-making. These solutions help drive revenue, increase profitability, and cut customer churn thanks to the comprehensive information collected and available in one place.

    Data warehouse implementation entails advantages in terms of making informed decisions. It provides comprehensive insights into what is going on within a company, while big data can be in the shape of massive but disorganized datasets. However, big data can be later used for data warehousing.

    Running a data-driven business means dealing with billions of data points on in-house and external operations, consumers, and regulations.

    Author: Katrine Spirina

    Source: In Data Labs

  • How data management can learn from basketball


    A data management plan is not something that can be implemented in isolation by one department or team in your organisation; it is a collective effort, similar to how different players perform on a basketball court.

    From the smallest schoolyard to the biggest pro venue, from the simplest pickup game to the NBA finals, players, coaches, and even fans will tell you that having a game plan and sticking to it is crucial to winning. It makes sense; while all players bring their own talents to the contest, those talents have to be coordinated and utilized for the greater good. When players have real teamwork, they can accomplish things far beyond what they could achieve individually. When players aren’t displaying teamwork, even if they are nominally part of the squad, they’re easy targets for competitors who know how to read their weaknesses and take advantage of them.

    Basketball has been used as an analogy for many aspects of business, from coordination to strategy, but among the most appropriate business activities that basketball most resembles is, believe it or not, data management. Perhaps more than anything, companies need to stick to their game plan when it comes to handling data: storing it, labeling it, and classifying it.

    A good data management plan could mean a winning season

    Without a plan followed by everyone in the organization, companies will soon find that their extensive collections of data are useless, just like the top talent a team manages to amass is useless without everyone on a team knowing what their role is. Failure to develop a data management plan could cost a company in time, and even in money. If data is not classified or labeled properly, search queries are likely to miss a great deal of it, skewing reports, profit and loss statements, and much more. 

    Even more worrying for companies is the need to produce data when regulators come calling. With the implementation of the European Union’s General Data Protection Regulation (GDPR), companies no longer have the option of not having a tight game plan for data management. Under GDPR rules, all EU citizens have 'the right to be forgotten', which requires companies to know what data they hold about an individual and to demonstrate to EU inspectors, on demand, the ability to delete it. Those rules apply not just to companies in Europe, but to all companies that do business with EU residents. GDPR violators can be fined as much as €20 million or 4% of annual global turnover, whichever is greater.

    Even companies that have no EU clients or customers need to improve their data management game, because GDPR-style rules are moving stateside as well. California recently passed its own digital privacy law (set to go into effect in January), which gives state residents the right to be forgotten, and other states are considering similar laws. And with the heads of large tech firms calling for privacy legislation in the U.S., it’s likely that federal legislation on the matter will be passed sooner rather than later.

    Data Management Teamwork, When and Where it Counts

    In basketball, players need to be molded to work together as a unit. A rogue player who decides that they want to be a 'shooting star' instead of following the playbook and passing when appropriate may make a name for themselves, but the team they are playing for is unlikely to benefit much from that kind of approach. Only when all the players work together, with each move complementing the other as prescribed by the game plan, can a team succeed.

    In data management, teams generate information that the organization can use to further its business goals. Data on sales, marketing, engagement with customers, praises and complaints, how long it takes team members to carry out and complete tasks, and a million other metrics all go into the databases and data storage systems of organizations for eventual analysis.

    With that data, companies can accomplish a great deal: Improve sales, make operations more efficient, open new markets, research new products and improve existing ones, and much more. That, of course, can only happen if all departments are able to access the data collected by everyone.

    Metadata management - A star 'player'

    Especially important is the data about data: the metadata, which refers to data structures, labels, and types. When different departments, and even individual employees, are responsible for entering data into a repository, they need to follow the metadata 'game plan': the one where all data is labeled according to a single standard, using common dictionaries, glossaries, and catalogs. Without that plan, data could easily get 'lost', and putting together search queries could be very difficult.

    Another problem is the fact that different departments will use different systems and products to process their data. Each data system comes with its own rules, and of course each set of rules is different. That there is no single system for labeling between the different products just contributes to the confusion, making resolution of metadata issues all the more difficult.

    Unfortunately, not everyone is always a team player when it comes to metadata. Due to pressure of time or other issues, different departments tend to use different terminology for data. For example, a department that works with Europe may label its dates in the form of year/month/day, while one that deals with American companies will use the month/day/year label. In a search form, the fields for 'years' and 'days' will not match across all data repositories, creating confusion. The department 'wins', but what about everyone else? And even in situations where the same terminology is used, the fact that different data systems are in use could impact metadata.
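    Normalizing department-specific date labels onto one shared standard is straightforward once each team's format is recorded as metadata. The department names and formats below are assumptions for the example; the shared target is ISO 8601.

    ```python
    from datetime import datetime

    # Hypothetical per-department metadata: the date format each team uses.
    DEPT_DATE_FORMAT = {"europe": "%Y/%m/%d", "americas": "%m/%d/%Y"}

    def normalize_date(raw, dept):
        """Map every department's date label onto one shared standard (ISO 8601)."""
        parsed = datetime.strptime(raw, DEPT_DATE_FORMAT[dept])
        return parsed.date().isoformat()

    print(normalize_date("2019/09/30", "europe"))    # 2019-09-30
    print(normalize_date("09/30/2019", "americas"))  # 2019-09-30
    ```

    With the conversion applied at ingestion time, search fields for 'years' and 'days' match across every repository regardless of which department entered the row.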

    Different departments have different objectives and goals, but team members cannot forget the overall objective: helping the 'team', the whole company, to win. The data they contribute is needed for those victories, those advancements. Without it, important opportunities could be lost. When data management isn’t done properly, teams may accomplish their own objectives, but the overall advancement of the company will suffer.

    'Superstars' whose objective is to aggrandize themselves have no place on a basketball team; they should be playing one-on-one hoops with others of their type. Teams in companies should learn the lesson: if you want to succeed in basketball, or in data management, you need to work together with others, following the data plan that will ensure success for everyone.

    Author: Amnon Drori

    Source: Dataconomy

  • Master Data Management and the role of (un)structured data

    Traditional conversations about master data management’s utility have centered on determining what actually constitutes MDM, how to implement data governance with it, and the balance between IT and business involvement in the continuity of MDM efforts.

    Although these concerns will always remain apposite, MDM’s overarching value is projected to significantly expand in 2018 to directly create optimal user experiences—for customers and business end users. The crux of doing so is to globalize its use across traditional domains and business units for more comprehensive value.

    “The big revelation that customers are having is how do we tie the data across domains, because that reference of what it means from one domain to another is really important,” Stibo Systems Chief Marketing Officer Prashant Bhatia observed.

    The interconnectivity of MDM domains is invaluable not only for monetization opportunities via customer interactions, but also for streamlining internal processes across the entire organization. Oftentimes the latter facilitates the former, especially when leveraged in conjunction with contemporary opportunities related to the Internet of Things and Artificial Intelligence.

    Structured and Unstructured Data

    One of the most prominent challenges facing MDM as its utility expands is the incorporation of both structured and unstructured data. Fueled in part by the abundance of external data besieging the enterprise from social, mobile, and cloud sources, unstructured and semi-structured data can pose difficulties for MDM schema.

    After attending the recent National Retail Federation conference with over 30,000 attendees, Bhatia noted that one of the primary themes was, “Machine learning, blockchain, or IoT is not as important as how does a company deal with unstructured data in conjunction with structured data, and understand how they’re going to process that data for their enterprise. That’s the thing that companies—retailers, manufacturers, etc.—have to figure out.”

    Organizations can integrate these varying data types into a single MDM platform by leveraging emerging options for schema and taxonomies with global implementations, naturally aligning these varying formats together. The competitive advantage generated from doing so is virtually illimitable. 

    Original equipment manufacturers and equipment asset management companies can obtain real-time, semi-structured or unstructured data about failing equipment and use it to enrich their product domain with attributes describing, for example, the condition of a specific consumer’s tires. The aggregation of that semi-structured data with structured data in an enterprise-spanning MDM system can influence several domains.

    Organizations can reference it with customer data for either preventive maintenance or discounted purchase offers. The location domain can use it to provide these services close to the customer; integrations with lifecycle management capabilities can determine what went wrong and how to correct it. “That IoT sensor provides so much data that can tie back to various domains,” Bhatia said. “The power of the MDM platform is to tie the data for domains together. The more domains that you can reference with one another, you get exponential benefits.”

    Universal Schema

    Although the preceding example pertained to the IoT, it’s worth noting that it’s applicable to virtually any data source or type. MDM’s capability to create these benefits is based on its ability to integrate different data formats on the back end. A uniformity of schema, taxonomies, and data models is desirable for doing so, especially when using MDM across the enterprise. 

    According to Franz CEO Jans Aasman, traditionally “Master Data Management just perpetuates the difficulty of talking to databases. In general, even if you make a master data schema, you still have the problem that all the data about a customer, or a patient, or a person of interest is still spread out over thousands of tables.” 

    Varying approaches can address this issue; there is growing credence around leveraging machine learning to obtain master data from various stores. Another approach is to considerably decrease the complexity of MDM schema so it’s more accessible to data designated as master data. By creating schema predicated on an exhaustive list of business-driven events, organizations can reduce the complexity of myriad database schemas (or even of conventional MDM schemas) so that their “master data schema is incredibly simple and elegant, but does not lose any data,” Aasman noted.

    Global Taxonomies

    Whether simplifying schema based on organizational events and a list of their outcomes or using AI to retrieve master data from multiple locations, the net worth of MDM is based on the business’s ability to inform the master data’s meaning and use. The foundation of what Forrester terms “business-defined views of data” is oftentimes the taxonomies predicated on business use as opposed to that of IT. Implementing taxonomies enterprise-wide is vital for the utility of multi-domain MDM (which compounds its value) since frequently, as Aasman indicated, “the same terms can have many different meanings” based on use case and department.

    The hierarchies implicit in taxonomies are infinitely utilitarian in this regard, since they enable consistency across the enterprise yet have subsets for various business domains. According to Aasman, the Financial Industry Business Ontology can also function as a taxonomy in which, “The higher level taxonomy is global to the entire bank, but the deeper you go in a particular business you get more specific terms, but they’re all bank specific to the entire company.”

    The ability of global taxonomies to link together meaning in different business domains is crucial to extracting value from cross-referencing the same master data for different applications or use cases. In many instances, taxonomies provide the basis for search and queries that are important for determining appropriate master data.
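    A global taxonomy with domain-specific subsets can be sketched as a small tree. The banking terms below are toy examples invented for illustration, not the actual ontology hierarchy: top-level terms are shared company-wide, while the leaves belong to one business domain.

    ```python
    # Toy global taxonomy: shared upper levels, domain-specific leaf terms.
    TAXONOMY = {
        "instrument": {
            "loan": {"retail": ["mortgage"], "corporate": ["syndicated_loan"]},
            "deposit": {"retail": ["savings_account"]},
        }
    }

    def domain_terms(taxonomy, domain):
        """Collect the leaf terms visible to one business domain."""
        terms = []
        def walk(node):
            for key, child in node.items():
                if key == domain:
                    terms.extend(child)          # this domain's leaf terms
                elif not isinstance(child, list):
                    walk(child)                  # descend shared levels
        walk(taxonomy)
        return terms

    print(domain_terms(TAXONOMY, "retail"))  # ['mortgage', 'savings_account']
    ```

    Each domain sees its own specific terms, yet every term hangs off the same shared upper levels, which is what makes cross-referencing master data between domains possible.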

    Timely Action

    By expanding the scope of MDM beyond traditional domain limitations, organizations can redouble the value of master data for customers and employees. By simplifying MDM schema and broadening taxonomies across the enterprise, they increase their ability to integrate unstructured and structured data for timely action. “MDM users in a B2B or B2C market can provide a better experience for their customers if they, the retailer and manufacturer, are more aware and educated about how to help their end customers,” Bhatia said.


    Author: Jelani Harper

    Source: Information Management

  • Rubrik is a data resilience leader in the latest Forrester report


    In the latest edition of the Forrester Wave report on Data Resilience Solutions, Rubrik has been named a leader. The multi-cloud data control vendor even received the highest score in the strategy category.

    Forrester evaluated ten vendors against forty criteria, grouped into three categories: current offering, strategy, and market presence. Rubrik achieved the highest possible score for strategy and security.

    'Rubrik is a fit for companies seeking to simplify, modernize, and consolidate their data resilience,' the report states. Rubrik is described as a 'simple, intuitive, and powerful policy engine that manages data protection regardless of the type, location, or purpose of the data.'

    According to Rubrik CEO Bipul Sinha, the recognition shows that Rubrik is well positioned to lead the transformation of the data management market. 'Customers are placing ever-higher demands on data management solutions that go beyond just backup and recovery. Receiving the highest score for strategy confirms that we are on the right track to keep meeting our customers' needs through innovation.'

    Source: BI Platform

  • Why cloud solutions are the way to go when dealing with global data management


    To manage geographically distributed data at scale worldwide, global organizations are turning to cloud and hybrid deployments.

    Enterprises that operate worldwide typically need to manage data both on the local level and globally across all geographies. Local business units and subsidiaries must address region-specific data standards, national regulations, accounting standards, unique customer requirements, and market drivers. At the same time, corporate headquarters must share data broadly and maintain a complete view of performance for the whole multinational enterprise.

    Furthermore, in many multinational firms, data is the business: consider worldwide e-commerce, travel services, logistics, and international finance. It therefore behooves each company to have state-of-the-art data management to remain innovative and competitive. These same organizations must also govern data locally and globally to comply with many legislated regulations, privacy policies, security measures, and data standards. Hence, global businesses are facing a long list of new business and technical requirements for modern data management in multinational markets.

    For maximum business value, how do you manage and govern data that resides on multiple premises, clouds, applications, and data platforms (literally) worldwide? Global data management based on cloud and hybrid deployments is how.

    Defining global data management in the cloud

    The distinguishing characteristic of global data management is its ever-broadening scope, which has numerous drivers and consequences:

    Multiple physical premises, each with unique IT systems and data assets. Multinational firms consist of geographically dispersed departments, business units, and subsidiaries that may integrate data with clients and partners. All these entities and their applications generate and use data with varying degrees of data sharing.

    Multiple clouds and cloud-based tools or platforms. In recent years, organizations of all sizes have aggressively modernized and extended their IT portfolios of operational applications. Although on-premises applications will be with us into the foreseeable future, organizations increasingly prefer cloud-based applications, licensed and deployed on the software-as-a-service (SaaS) model. Similarly, when organizations develop their own applications (which is the preferred approach with data-driven use cases, such as data warehousing and analytics), the trend is away from on-premises computing platforms in favor of cloud-based ones from Amazon, Google, Microsoft, and others. Hybrid IT and data management environments result from the mix of systems and data that exist both on premises and in the cloud.

    Extremely diverse data with equally diverse management requirements. Data in global organizations is certainly big, but it is also diverse in terms of its schema, latencies, containers, and domains. The leading driver of data diversity is the arrival of new data sources, including SaaS applications, social media, the Internet of Things (IoT), and recently digitized business functions such as the online supply chain and marketing channels. On the one hand, data is diversifying. On the other hand, global organizations are also diversifying the use cases that demand large volumes of integrated and repurposed data, ranging from advanced analytics to real-time business management.

    Multiple platforms and tools to address diverse global data requirements. Given the diversity of data that global organizations manage, it is impossible to optimize one platform (or a short list of platforms) to meet all data requirements. Diverse data needs diverse data platforms. This is one reason global firms are leaders in adopting new computing platforms (clouds, on-premises clusters) and new data platforms (cloud DBMSs, Hadoop, NoSQL).

    The point of global data management in the cloud

    The right data is captured, stored, processed, and presented in the right way. An eclectic portfolio of data platforms and tools (managing extremely diverse data in support of diverse use cases) can lead to highly complex deployments where multiple platforms must interoperate at scale with high performance. Users embrace the complexity and succeed with it because the eclectic portfolio gives them numerous options for capturing, storing, processing, and presenting data in ways that a smaller and simpler portfolio cannot satisfy.

    Depend on the cloud to achieve the key goals of global data management. For example, global data can scale via unlimited cloud storage, which is a key data requirement for multinational firms and other very large organizations with terabyte- and petabyte-scale data assets. Similarly, clouds are known to assure high performance via elastic resource management; adopting a uniform cloud infrastructure worldwide can help create consistent performance for most users and applications across geographies. In addition, global organizations tell TDWI that they consider the cloud a 'neutral Switzerland' that sets proper expectations for shared data assets and open access. This, in turn, fosters the intraenterprise and interenterprise communication and collaboration that global organizations require for daily operations and innovation.

    Cloud has general benefits that contribute to global data management. Regardless of how global your organization is, it can benefit from the low administrative costs of a cloud platform due to the minimal system integration, capacity planning, and performance tweaking required of cloud deployments. Similarly, a cloud platform alleviates the need for capital spending, so up-front investments are not an impediment to entry. Furthermore, most public cloud providers have an established track record for security, data protection, and high availability as well as support for microservices and managed services.

    Strive to thrive, not merely survive. Let’s not forget the obvious. Where data exists, it must be managed properly in the context of specific business processes. In other words, global organizations have little choice but to step up to the scale, speed, diversity, complexity, and sophistication of global data management. Likewise, cloud is an obvious and viable platform for achieving these demanding goals. Even so, global data management should not be about merely surviving global data. It should also be about thriving as a global organization by leveraging global data for innovative use cases in analytics, operations, compliance, and communications across organizational boundaries.

    Author: Philip Russom

    Source: TDWI

  • Will the battle on data between Business and IT be ended?


    Business users have growing customer expectations, changing market dynamics, increasing competition, and evolving regulatory conditions to deal with. These factors compound the pressure on business decision makers to act now. Unfortunately, they often can’t get the data they need when they need it.

    Research shows that business managers often have to make data-driven decisions within one day. However, building a single report using traditional BI methods can take six weeks or longer, and a typical business intelligence deployment can take up to 18 months.

    On the IT side, teams are feeling the pressure. They have a long list of items to do for the short run and long run. Regarding data management, IT has to try to combine data from multiple sources, ensure that data is secure and accurate, and deliver the data to the business user as requested.

    Given the need for data now, combined with the bandwidth constraints placed on IT, many organizations find that their enterprise lacks the skills, technology, and support to use corporate data to keep up with competitors, customer needs, and the marketplace.

    Adding to this existing challenge is the notion that companies are continuously adding new data sources, but each new data integration can take weeks or even months. By the time the work is complete, it’s likely that a newer, better source has already taken its place.

    Automation is a force that is driving change throughout the entire BI stack. Just look at the proliferation of self-service data visualization tools. But self-service analytics can quickly go awry without adequate governance.

    Companies that can integrate self-service BI and still maintain governance, security, and data quality will empower business users to make decisions on-demand, while relieving IT from these internal stakeholder pressures.

    Having the ability to store data in a central place or hub, where it can be cleansed, reconciled, and made available to business users as a consistent, on-demand resource, can help solve the issue.

    When quality issues arise, or bad data is found, the error can be corrected once in the hub for all users – resulting in one single source of the truth. It is a place where data quality and consistency are maintained. This central repository enables the right person to have access to the right data at the right time.
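    The 'correct once in the hub, visible to all users' idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the `DataHub` class, record keys, and fields are invented for the example, not taken from any specific product):

    ```python
    # Minimal sketch of a central data hub: all consumers read from the
    # same store, so one correction is immediately visible to everyone.
    class DataHub:
        def __init__(self):
            self._records = {}  # key -> the single authoritative record

        def load(self, key, record):
            self._records[key] = dict(record)

        def get(self, key):
            # Consumers get a copy, so they cannot bypass the hub's
            # correction process by mutating the shared record directly.
            return dict(self._records[key])

        def correct(self, key, field, value):
            # Fix bad data once, in the hub, for all users.
            self._records[key][field] = value

    hub = DataHub()
    hub.load("customer:42", {"name": "ACME Corp", "country": "NL "})

    # A data quality issue (trailing whitespace) is found and corrected
    # once in the hub...
    hub.correct("customer:42", "country", "NL")

    # ...and every downstream reader now sees the consistent value.
    print(hub.get("customer:42")["country"])  # NL
    ```

    A real hub would add validation rules, audit trails, and access control on top of this pattern, but the core benefit is the same: one correction, one source of truth.
    
    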

    Business executives, managers, and frontline users in operations want the power to move beyond the limits of spreadsheets so that they can engage in deeper analysis by leveraging data insights to strengthen all types of decision needs. Today, newer tools and methods are making it possible for organizations to meet the demands of nontechnical users by enabling them to access, integrate, transform, and visualize data without traditional IT handholding.

    The age of self-service demands that business users have full and flexible access to their data. It also demands that business users be the ones who determine that data should be included in the system. And while business users need the expert help of IT to ensure the quality, consistency, and contextual validity of the data, business and IT can now work together more closely and more easily than ever before.

    Organizations can effectively “democratize” data by addressing the needs of nontechnical users, including business executives, managers, and frontline users. This happens when they grant more power to those users, not just in terms of access and discovery, but also in terms of sourcing what goes into a central hub.

    In the end, giving more power to the people is one surefire way to help end the battle between business and IT.

    Author: Heine Krog Iversen

    Source: Information Management
