14 items tagged "data governance"

  • Achieving Business Success by Approaching Data Governance the Right Way

    Data analytics and AI play an increasingly pivotal role in most modern organizations. To keep those initiatives on track, enterprises must roll out Data Governance programs to ensure data integrity, compliance, and optimal business value. Data Governance has become a fundamental element of success, a key to establishing the data integrity framework in any business.

    The most successful Data Governance programs use a business-first approach, delivering quick wins and cultivating sustained success throughout the organization. Unfortunately, many organizations neglect to implement such programs until they experience a negative event that highlights the absence of good Data Governance. That could be a data breach, a breakdown in Data Quality, or a compliance action that highlights the lack of effective controls.

    Once that happens, there are several different paths a Data Governance initiative might take. A typical scenario often plays out this way: The executive team calls for implementation of a company-wide Data Governance program. The newly minted Data Governance team forges ahead, engaging business users throughout the organization and expecting that everyone will be aligned around a common purpose.

    Why So Many Data Governance Programs Fall Short

    The problem with that approach is that business users don’t always see the big picture, and roles and responsibilities may not be clearly defined. Even though executive management is fully on board, the front-line employees who work with the data every day don’t necessarily understand who should be doing what. 

    Expectations are often misaligned, as business users look to the Data Governance team to deal with minutiae. That often leads to a firefighting mentality, endless cycles of meetings, and frustration all around. Ultimately, executive management revokes its commitment, and the Data Governance program sits on the back burner until another negative incident involving data emerges.

    There is a better way. It involves a business-first approach that aligns the Data Governance program with clear goals that add value for the organization. It requires a well-defined path to success, with clear priorities that enable your Data Governance program to show tangible progress.

    A Business-First Approach to Data Governance

    The business-first approach centers around four key principles:

    • Data Governance must be clearly linked to the overarching goals of the business.
    • Data must be prioritized, focusing on the most essential elements initially.
    • Effective stakeholder engagement must happen across all levels of the organization.
    • The team must define and articulate a clear path to success that aligns with stakeholders’ definition of organizational success.

    The advantages of this business-first approach have proven out over and over again. 

    How Data Drives Your Business

    Data serves three critically important functions in most organizations. Reporting and compliance help shield the company from regulatory action and risk. Analytics and insights inform both strategic and tactical decisions and provide an accurate picture of how the organization is doing with respect to key performance indicators (KPIs) and process performance indicators (PPIs). Finally, data drives operational excellence by enabling automation and eliminating friction from business processes.

    The best Data Governance programs act as a support for all three of these functions. To be successful, data professionals must think about how data is going to be used to drive all of them simultaneously. In many organizations, these three functions often operate as disconnected silos, although they frequently work with the same data.

    Imagine that customer service leaders in the organization want to increase online data availability for self-service inquiries without adversely impacting risk and compliance. That could result in happier customers and fewer routine calls handled by customer service personnel. The same customer data serves the product management team, as they seek to better understand the company’s customers and their needs using advanced analytics. 

    An effective Data Governance program helps to meet these objectives. It reframes Data Governance as a supporting function that contributes to success across various business initiatives, rather than as a net-new responsibility that each department must attend to in its own way. To business users, this shifts Data Governance from the liability column to the asset column. 

    Prioritize Your Data

    Not all data is created equal. When program leaders fail to prioritize, their Data Governance programs are less likely to produce the intended results, largely because they fail to establish the value of specific data elements with respect to clearly defined business objectives.

    Engage Your Stakeholders for Success

    Naturally, it is important to engage with your colleagues throughout all levels of the organization to make your Data Governance program a success. Program leaders should communicate in value metrics that resonate with strategic, operational, and tactical stakeholders so that they understand the value of your Data Governance program in helping them to achieve their own goals and objectives. 

    By defining clear objectives and maintaining open communication with stakeholders in all three of these levels within the company, data leaders can shepherd the Data Governance program toward its first milestones of success. That will typically involve a regular cadence of meetings, newsletters, ad hoc discussions, and one-on-one interactions. This is where data leaders must develop the habit of monitoring and adjusting to ensure that adoption is proceeding at pace and the Data Governance program is delivering on its promises.

    Author: Emily Washington

    Source: Dataversity

  • BI topics to tackle when migrating to the cloud

    When your organization decides to pull the trigger on a cloud migration, a lot of things start happening at once. However long the planning process has been, once data starts being relocated, competing factors that were purely theoretical become devastatingly real: frontline business users still want to run analyses while the migration is happening, your data engineers are worried about the switch from whatever database you were using before, and the development org has its own data needs. With a comprehensive, BI-focused data strategy, you and your stakeholders will know what your ideal data model should look like once all your data is moved over. That way, as you manage the process and try to keep everyone happy, you'll end up in a stronger place when the migration is over than you were at the start. And isn't that the goal?

    BI focus and your data infrastructure

    “What does all this have to do with my data model?” you might be wondering. “And for that matter, my BI solution?”

    I’m glad you asked, internet stranger. The answer is everything. Your data infrastructure underpins your data model and powers all of your business-critical IT systems. The form it takes can have immense ramifications for your organization, your product, and the new things you want to do with it. Your data infrastructure is hooked into your BI solution via connectors, so it’ll work no matter where the data is stored. Picking the right data model, once all your data is in its new home, is the final piece that will allow you to get the most out of it with your BI solution. If you don’t have a BI solution, the perfect time to implement one is once all your data is moved over and your model is built. This should all be part of your organization’s holistic cloud strategy, with buy-in from major partners who are handling the migration.

    Picking the right database model for you

    So you’re giving your data a new home and maybe implementing a BI solution when it’s all done. Now, what database model is right for your company and your use case? There are a wide array of ways to organize data, depending on what you want to do with it.

    One of the broadest is a conceptual model, which focuses on representing the objects that matter most to the business and the relationships between them. This database model is designed principally for business users. Compare this to a physical model, which is all about the structure of the data. In this model, you'll be dealing with tables, columns, relationships, and so on, as well as foreign keys, which define the connections between the tables.
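    To make the physical model concrete, here's a minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration). Foreign keys are what tie the tables together, and once enforcement is switched on, the database rejects rows that break them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

# A physical model: concrete tables, columns, and a foreign key between them.
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    total       REAL
);
""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")      # OK: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (101, 99, 10.0)")  # customer 99 does not exist
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

    The physical model is exactly this level of detail: which columns exist, what types they have, and which keys connect the tables.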

    Now, let’s say you’re only focused on representing your data organization and architecture graphically, putting aside the physical usage or database management framework. In cases like these, a logical model could be the way to go. Examples of these types of databases include relational (dealing with data as tables or relations), network (putting data in the form of records), and hierarchical (which is a progressive tree-type structure, with each branch of the tree showing related records). These models all feature a high degree of standardization and cover all entities in the dataset and the relationships between them.

    Got a wide array of different objects and types of data to deal with? Consider an object-oriented database model, sometimes called a “hybrid model.” These models look at their contained data as a collection of reusable software pieces, all with related features. They also consolidate tables but aren’t limited to the tables, giving you freedom when dealing with lots of varied data. You can use this kind of model for multimedia items you can’t put in a relational database or to create a hypertext database to connect to another object and sort out divergent information.

    Lastly, we can’t help but mention the star schema here, which has dimension tables arranged around a central fact table and looks like an asterisk. This model is great for querying informational indexes as part of a larger data pool. It’s used to dig up insights for business users, OLAP cubes, analytics apps, and ad hoc analyses. It’s a simple yet powerful structure that sees a lot of use.
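    As a sketch of how a star schema hangs together (the table and column names here are invented for illustration), the fact table sits in the middle and each dimension table answers one kind of "by what?" question. Using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The "star": a central fact table referencing the surrounding dimension tables.
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    amount     REAL
);
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO dim_date VALUES (10, '2019-03-01', '2019-03')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 10, 20.0), (2, 1, 10, 5.0)])

# A typical ad hoc rollup: revenue by category and month.
row = conn.execute("""
    SELECT p.category, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON f.product_id = p.product_id
    JOIN dim_date d    ON f.date_id    = d.date_id
    GROUP BY p.category, d.month
""").fetchone()
print(row)  # ('Hardware', '2019-03', 25.0)
```

    The payoff is that business questions map directly onto joins from the fact table out to the dimensions, which is why star schemas are such a common target for BI tools.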

    Now what?

    Whether you’re building awesome analytics into your app or empowering in-house users to get more out of your data, knowing what you’re doing with your data is key to maintaining the right models. Once you’ve picked your database, it’s time to pick your data model, with an eye towards what you want to do with it once it’s hooked into your BI solution.

    Worried about losing customers? A predictive churn model can help you get ahead of the curve by putting time and attention into relationships that are at risk of going sour. On the other side of the coin, predictive up- and cross-sell models can show you where you can get more money out of a customer and which ones are ripe to deepen your financial relationship.
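    Real churn models are usually trained on historical data with a machine learning library, but the core idea — scoring each customer by recency and frequency so you know which relationships need attention — can be sketched in a few lines of plain Python (the weights and thresholds below are invented for illustration):

```python
from datetime import date

def churn_risk(last_purchase: date, purchases_last_year: int, today: date) -> float:
    """Toy churn-risk score in [0, 1]; higher means more likely to leave."""
    recency = min((today - last_purchase).days / 365, 1.0)  # 1.0 = a year+ since last buy
    infrequency = 1.0 - min(purchases_last_year / 12, 1.0)  # 0.0 = monthly buyer
    return round(0.6 * recency + 0.4 * infrequency, 2)

# A lapsed customer scores high; a frequent recent buyer scores low.
print(churn_risk(date(2023, 1, 1), 1, date(2024, 1, 1)))     # 0.97
print(churn_risk(date(2023, 12, 20), 12, date(2024, 1, 1)))  # 0.02
```

    A trained model would learn those weights from historical churn outcomes instead of hard-coding them, but the output is the same shape: a ranked list of at-risk relationships.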

    What about your marketing efforts? A customer segmentation data model can help you understand the buying behaviors of your current customers and target groups, and which marketing plays are having the desired effect. Or go beyond marketing with “next-best-action” models that take into account life events, purchasing behaviors, social media, and anything else you can get your hands on, so you can figure out which next action for a given target (email, ads, phone call, etc.) will have the greatest impact. And predictive analyses aren’t just for human-centric activities: manufacturing and logistics companies can take advantage of maintenance models that let them head off machine breakdowns based on historical data. Don’t get caught without a vital piece of equipment again.
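    A production segmentation model would typically cluster on many behavioral features, but the shape of the idea — ranking customers and binning them into tiers you can target differently — fits in a short sketch (the tier names and the spend-based ranking are invented for illustration):

```python
def segment_customers(annual_spend: dict[str, float]) -> dict[str, str]:
    """Toy segmentation: bin customers into thirds by spend rank."""
    ranked = sorted(annual_spend, key=annual_spend.get, reverse=True)
    n = len(ranked)
    tiers = {}
    for i, customer in enumerate(ranked):
        if i < n / 3:
            tiers[customer] = "high-value"
        elif i < 2 * n / 3:
            tiers[customer] = "mid-value"
        else:
            tiers[customer] = "low-value"
    return tiers

spend = {"ann": 1200.0, "bo": 300.0, "cy": 90.0}
print(segment_customers(spend))
# {'ann': 'high-value', 'bo': 'mid-value', 'cy': 'low-value'}
```

    Each tier then gets its own marketing play, which is the whole point of segmenting in the first place.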

    Bringing it all together with BI

    Staying focused on your long-term goals is an important key to success. Whether you’re building a game-changing product or rebuilding your data model, having a well-defined goal makes all the difference in the world when it comes to the success of your enterprise. If you’re already migrating your data to the cloud, then you’re at the perfect juncture to pick the right database and data models for your eventual use cases. Once these are set up, they’ll integrate seamlessly with your BI tool (and if you don’t have one yet, it’ll be the perfect time to implement one). Big moves like this represent big challenges, but also big opportunities to lay the foundation for whatever you’re planning on building. Then you just have to build it!

    Author: Jack Cieslak

    Source: Sisense

  • BI trends: What to expect for retail in 2019?

    To help retailers and brands plan for 2019, researchers Claudia Tajima and Fiona Swerdlow are interviewing experts within Forrester for their series, ‘Applying 2019 Predictions To Retail’. This week, Claudia interviewed Jennifer Belissent, Ph.D., principal analyst on Forrester’s consumer insights team, about the 2019 BI predictions report. Here’s what Jennifer thinks retailers and brands can expect and should focus on regarding BI for the rest of 2019.

    Claudia: Your first BI prediction for 2019 states that companies cannot be successful simply selling raw data and that self-service data marketplaces will struggle. BI tools need to start delivering data insights and services. How does this shift affect retailers?

    Jennifer: For retailers today, there is a vast amount of data that you could use to improve business and better understand your customers. Many retailers already use their data to understand their customers and forecast trends. But the landscape of new, alternative data sources is evolving rapidly. Opportunities to enrich data with new sources are appealing. However, retailers must evaluate those opportunities carefully. Why? The time to value is longer when buying raw data instead of buying data insights. For example, buying a customer’s credit score delivers faster time to value than buying raw customer data and deriving the score yourself. My recommendation is that retailers should not rush to buy data or expect to be able to buy data from a marketplace and get all the answers they’re searching for. In some cases, retailers need insights service providers to interpret this data.

    Claudia: How will the demand for data storytelling skills impact retailers’ talent acquisition strategies?

    Jennifer: This demand exists because there is a gap between technology users and data scientists. They don’t always speak the same language, but a storyteller can bridge the gap. Organizations need a storyteller who can talk to the business team, data team, and the technology team and help them reach a common understanding. This balance is critical for BI teams to be able to both organize data and deliver the data in a compelling way. Forrester’s research suggests that more mature companies, those that are more ‘insights-driven’, have these skills.

    Claudia: Organizations are predicted to abandon unactionable BI reporting and dashboards. How should retailers respond to growing derelict dashboard graveyards?

    Jennifer: Retailers are seeking answers to questions such as: How does one store compare to others? How does it compare to regional sales? However, retail leaders’ interest in a specific report or dashboard tends to fade over time. Creating a data center of excellence increases data awareness, but it also brings about a frenzy of requests for new dashboards and reports. Ultimately, many of these requests end up as orphaned dashboards. It is important for retailers to be careful about how they embrace data democratization. Take time to step back and rationalize, prioritize, and determine which reports and dashboards you need and don’t need.

    Claudia: Why should retailers consider adopting data fabrics in place of data lakes?

    Jennifer: In the past, many organizations chose to put their data into massive data lakes. However, these organizations did not fully think through how their data lakes should be organized and used. Today, organizations are starting to realize that there is no major benefit to putting all of their data into one centralized data lake. The new trend is to create a data fabric: data woven together from across different parts of the organization. Each data store in the fabric maintains its own individual data, but there is a central point through which it can all be accessed.

    Claudia: What recommendations would you give to retail leaders looking into investing in BI tools in the coming 12 to 18 months?

    Jennifer: Data catalogues, which serve as a knowledge repository, are becoming very popular. Organizations typically have one centralized data catalogue. Interesting data catalogue outputs include use cases, algorithms, which reports the data has been used for, and where it has been tested in sales. In terms of ambient data governance tools, retailers should look for BI tools that have data governance built directly into them.

    Author: Fiona Swerdlow

    Source: Forrester

  • Conquering the 4 Key Data Integration Challenges

    Integrating data successfully into a single platform can be a challenge. Well-integrated data makes it easy for the appropriate staff to access and work with it; poorly integrated data creates problems. Data integration can be described as the process of collecting data from a variety of sources and transforming it into a format compatible with the data storage system – typically, a database or a data warehouse. The use of integrated data when making business decisions has become a common practice for many organizations. Unfortunately, the data integration process can be troublesome, making it difficult to use the data when it is needed.

    Successful data integration allows researchers to develop meaningful insights and useful business intelligence.

    Integrated data creates a layer of informational connectivity that lays a base for research and analytics. Data integration maximizes the value of a business’s data, but the integration process requires the right tools and strategies. It allows a business to increase its returns, optimize its resources, and improve customer satisfaction. Data integration promotes high-quality data and useful business intelligence. 

    With the amount of data consistently growing in volume, and the variety of data formats, data integration tools (such as data pipelines) become a necessity. 

    By sharing this high-quality data across departments, organizations can streamline their processes and improve customer satisfaction. Other benefits of integrated data include:

    • Improved communication and collaboration
    • Increased data value 
    • Faster, better decisions based on accurate data
    • Increased sales and profits

    For data to be useful, it must be available for analysis, which means it must be in a readable format. 

    A Variety of Sources

    Data can be gathered from internal sources as well as a variety of external sources. Data taken from internal sources is referred to as “primary data,” while “secondary data” is usually, though not always, collected from outside sources. The sources selected can vary depending on the needs of the research, and each data storage system is unique.

    Secondary data is not limited to that from a different organization. It can also come from within an organization itself. Additionally, there are open data sources. 

    With the growing volume of data, the large number of data sources, and their varying formats, data integration has become a necessity for doing useful research. It has become an integral part of developing business intelligence. Some examples of data sources are listed below.

    Primary Data

    • Sensors: Recorded data from a sensor, such as a camera or thermometer
    • Survey: Answers to business and quality of service questions
    • User Input: Often used to record customer behavior (clicks, time spent)
    • Geographical Data: The location of an entity (a person or machine) using equipment at a point in time
    • Transactions: Business transactions (typically online)
    • Event Data: Recording of the data is triggered by an event (email arriving, sensor detecting motion)

    Secondary Data

    • World Bank Open Data
    • Data.gov (studies by the U.S. government)
    • NYU Libraries Research Guides (Science)

    Internal Secondary Data

    • Quickbooks (for expense management)
    • Salesforce (for customer information/sales data)
    • Quarterly sales figures
    • Emails 
    • Metadata
    • Website cookies

    Purchased, third-party data can also be a concern, since it is bought by businesses that have no direct relationship with the consumers it describes. Two fairly safe sources of third-party data are the Data Supermarket and Databroker.

    Top Data Integration Challenges

    Data integration is an ongoing process that will evolve as the organization grows. Integrating data effectively is essential to improve the customer experience, or to gain a better understanding of the areas in the business that need improving. There are a number of prominent data integration problems that businesses commonly encounter:

    1. Data is not where it should be: This common problem occurs when the data is not stored in a central location. Instead, data is spread throughout the organization’s various departments. This situation promotes the risk of missing crucial information during research.

    A simple solution is to store all data in a single location (or perhaps two: the primary database and a data warehouse). Apart from personal information that is protected by law, departments must share their information, and data silos should be forbidden.

    2. Data collection delays: Often, data must be processed in real time to provide accurate and meaningful insights. However, if data technicians must be involved to manually complete the data integration process, real-time processing is not possible. This, in turn, leads to delays in customer processing and analytics. 

    The solution to this problem is automated data integration tools. They have been developed specifically to process data in real time, prompting efficiency and customer satisfaction.

    3. Unstructured data formatting issues: A common challenge for data integration is the use of unstructured data (photos, video, audio, social media). A continuously growing amount of unstructured data is being generated and collected by businesses. Unstructured data often contains useful information that can impact business decisions. Unfortunately, unstructured data is difficult for computers to read and analyze. 

    There are new software tools that can assist in translating unstructured data (e.g., MonkeyLearn, which uses machine learning to find patterns, and Cogito, which uses natural language processing).

    4. Poor-quality data: Poor-quality data has a negative impact on research and can promote poor decision-making. In some cases, there is an abundance of data, but huge amounts reflect “old” information that is no longer relevant or directly conflicts with current information. In other cases, duplicated and partially duplicated data can provide an inaccurate representation of customer behavior. Inputting large amounts of data manually can also lead to mistakes.

    The quality of data determines how valuable an organization’s business intelligence will be. If an organization has an abundance of poor-quality data, it must be assumed there is no Data Governance program in place, or the Data Governance program is poorly designed. The solution to poor data quality is the implementation of a well-designed Data Governance program. (A first step in developing a Data Governance program is cleaning up the data. This can be done in-house with the help of data quality tools or with the more expensive solution of hiring outside help.)
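    Duplicate and near-duplicate records are one data quality problem that is easy to illustrate. Here is a minimal sketch in plain Python, assuming (hypothetically) that a lowercased, whitespace-stripped email address is a good-enough identity key; real cleanup pipelines use much richer matching rules:

```python
def deduplicate(records: list[dict]) -> list[dict]:
    """Collapse records that refer to the same customer after normalization."""
    seen = {}
    for rec in records:
        key = rec["email"].strip().lower()  # hypothetical identity key
        seen.setdefault(key, rec)           # keep the first occurrence
    return list(seen.values())

customers = [
    {"name": "Ann Lee", "email": "ann@example.com"},
    {"name": "A. Lee",  "email": " Ann@Example.com "},  # same person, messier entry
    {"name": "Bo Chen", "email": "bo@example.com"},
]
print(len(deduplicate(customers)))  # 2
```

    Data quality tools automate exactly this kind of normalization and matching, at scale and with smarter rules for deciding which of the duplicates to keep.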

    The Future of Data Integration

    Data integration methods are shifting from ETL (extract-transform-load) to automated ELT (extract-load-transform) and cloud-based data integration. Machine learning (ML) and artificial intelligence (AI) are in the early stages of development for working with data integration. 

    An ELT system loads raw data directly to a data warehouse (or a data lake), shifting the transformation process to the end of the pipeline. This allows the data to be examined before being transformed and possibly altered. This process is very efficient when processing significant amounts of data for analytics and business intelligence.
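    The ELT pattern described above can be sketched with Python's built-in sqlite3 standing in for the warehouse (the table names and sample rows are invented for illustration): raw records are landed first, untouched, and the transformation runs inside the warehouse afterwards:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: land the source rows exactly as they arrive, everything as text.
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, status TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", [
    ("1", "42.50", "complete"),
    ("2", "13.00", "complete"),
    ("3", "oops",  "error"),     # bad rows land too, and can be inspected before transforming
])

# Transform: runs inside the warehouse, after loading (the "T" at the end of ELT).
conn.execute("""
CREATE TABLE orders AS
SELECT CAST(order_id AS INTEGER) AS order_id,
       CAST(amount AS REAL)      AS amount
FROM raw_orders
WHERE status = 'complete'
""")
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 55.5
```

    Because the raw table is kept, the transformation can be rerun or revised later without going back to the source systems, which is a large part of ELT's appeal.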

    A cloud-based data integration system helps businesses merge data from various sources, typically sending it to a cloud-based data warehouse. This integration system improves operational efficiency and supports real-time data processing. As more businesses use Software-as-a-Service, experts predict more than 90% of data-driven businesses will eventually shift to cloud-based data integration. From the cloud, integrated data can be accessed with a variety of devices.

    Using machine learning and artificial intelligence to integrate data is a recent development, and still evolving. AI- and ML-powered data integration requires less human intervention and handles semi-structured or unstructured data formats with relative ease. AI can automate the data transformation mapping process with machine learning algorithms.

    Author: Keith D. Foote

    Source: Dataversity

  • Decoding Digital Transformation Failures: Insights for IT Leaders

    Aspiring transformational leaders can learn a lot from what most often derails digital success. Here’s a roundup of challenges you’ll want to anticipate and ways to help prepare your organization for the journey ahead. 

    In today’s fast-paced business world, companies are striving to harness the power of digital technologies to reinvent their operations, enhance customer experiences, drive innovation, and thereby create value for stakeholders. But the hard truth is that many digital initiatives fail to deliver results.

    Transformation efforts can be derailed for any number of reasons, but there are several common themes as to why digital initiatives fall short — and most revolve around leadership. So if you are seeking to lead transformational change at your organization, it’s worth knowing the 10 most common reasons why digital transformation fails and what you as an IT leader can learn from those failures.

    1. Lack of vision

    A common reason digital transformation fails is due to a lack of vision, which along with planning is the foundation for digital success. Without a clear understanding of what their digital transformation should achieve, it’s easy for companies to get lost in the weeds. IT leaders must work with business leadership to help establish a clear understanding of digital transformation goals and a practical roadmap for achieving them.

    2. Resistance to change

    Change is hard, and digital transformation requires a lot of it. Every step of the way provides an opportunity for employees to resist new technologies or processes, which can derail even the most well-designed and well-executed digital transformation efforts.

    Reimagination of business processes sits at the core of digital transformation, and so, by definition, digital transformation challenges the status quo, throwing we-have-always-done-it-this-way sentiment out of the window. Because of this, IT leaders must take a proactive approach to change management, communicating the benefits of digital transformation and providing support and training to employees.

    A study by McKinsey found that companies that prioritized cultural factors in digital transformations were four times more likely to succeed than those that focused on technology alone. Employee buy-in is crucial and requires involving them in the transformation process early and often.

    3. Lack of cross-functional collaboration

    Digital transformation requires strong leadership and support from all business functions. To succeed, company executives must appoint a leader who is in charge of the company’s transformation efforts and who can champion the initiative, drive buy-in across functions, and provide guidance and support.

    While typically this leader will come from technology or digital divisions, digital transformation cannot succeed without the involvement of multiple departments and stakeholders. A lack of collaboration among these stakeholders can lead to failure. Transformational leaders must ensure that everyone is on the same page and that there is effective communication and collaboration throughout the digital transformation process. Cross-functional buy-in and collaboration will break down silos and lead to better outcomes.

    4. Poor execution

    Even the best plans can fail if execution is poor. Transformational leaders must ensure their organization has the resources and expertise to execute its digital transformation plans effectively. This may require hiring outside experts and/or investing in training and development for existing staff.

    5. Insufficient budget

    Digital transformation can be expensive, and executive leadership teams that do not allocate enough budget to the initiative may struggle to succeed. Be realistic about the costs of digital transformation and allocate sufficient human capital and financial capital to achieve your goals.

    6. Lack of talent

    Talent is the only differentiating factor an organization has. Digital transformation is about envisioning new ways of doing business, reimagining business processes, transforming business/systems architecture, and changing an organization’s culture. It requires a different mindset, as well as an agile, ready-to-experiment workforce that is change-savvy. Many organizations lack the skills and knowledge necessary for successful digital transformation. Hiring skilled personnel or providing training for existing employees is essential for successful transformation.

    7. Technology integration challenges

    Digital transformation often involves the integration of new technologies with existing systems, which can be a challenge. Transformational leaders must ensure their organizations have the expertise to integrate new technologies effectively and the follow-through to test and troubleshoot thoroughly before going live. Failure to align technology capabilities with business goals can result in a wasted investment in technology that doesn’t support business objectives.

    8. Inadequate data management and governance

    Data is at the heart of digital transformation, and companies that don’t have adequate data management processes in place are likely to struggle. Transformational leaders must ensure their organizations have the right systems and processes in place to collect, store, and analyze data effectively. Ensuring data quality, privacy, and security is essential.

    9. Short-term thinking and lack of agility

    Digital transformation is a long-term process, and a short-term mindset can derail the entire effort. Leaders need to be patient and have a long-term perspective to ensure digital transformation succeeds.

    Digital transformation requires agility and flexibility. Companies that are slow to adapt to changing circumstances or new technologies are likely to struggle. Companies should be open to change and willing to pivot as needed to ensure that their digital transformation initiatives stay on track. Digital transformation is not a one-time event but a continuous process. Failing to improve and iterate the transformation effort can result in outdated technology and processes that fail to meet evolving business needs.

    10. Overlooking customer needs

    Digital transformation is ultimately about improving the customer experience, and companies that don’t focus on their customers are likely to fail. Transformational leaders should ensure that their organizations have a deep understanding of their customers’ needs and that their digital transformation initiatives are designed with the customer in mind. 

    Doing digital right

    Digital transformation can be a challenging but rewarding process. Companies that avoid the above common pitfalls and stay focused on their goals and their customers are more likely to succeed in their digital transformation efforts. Those efforts start at the top, with executives and senior leaders committing wholeheartedly to investing in new capabilities, aligning resources, working together in new ways to achieve shared goals, and fostering a culture of unity. With the right leadership, vision, execution, and collaboration, digital transformation can be a powerful tool for driving growth and innovation.

    Here are five things IT leaders can do to ensure their digital transformation efforts are successful.

    1. Develop a clear strategy: A clear strategy that outlines goals and objectives, timelines, and resources required is essential for digital transformation success. Leaders must clearly define what they want to achieve through digital transformation and how they plan to do it.
    2. Foster a culture of innovation: Digital transformation requires innovation and experimentation, and thus a culture that embraces new technologies and ideas. IT leaders must help facilitate a shift in organizational mindset toward a willingness to take risks and learn from failures.
    3. Invest in talent: Digital transformation requires a skilled workforce with expertise in technology, data analysis, and project management. Organizations need to invest in training and development programs to upskill their employees in key digital skills and to attract top talent.
    4. Focus on customer experience: Customer experience should be at the center of any digital transformation initiative. Leaders must understand their customers’ needs and preferences and design solutions that meet those needs.
    5. Measure and monitor progress: Digital transformation is an ongoing process, and organizations need to measure and monitor their progress to ensure they are on track to achieving their goals. This involves setting up metrics and KPIs and regularly reviewing them to identify areas for improvement.

    By following these key principles, IT leaders can help their organizations overcome the challenges of digital transformation and reap the benefits of a more agile, efficient, and customer-centric business model.

    Date: December 1, 2023

    Author: Supantha Banerjee

    Source: CIO

  • Five key aspects of data governance  

    Five key aspects of data governance

    What is data governance? Well, for one, it is a buzzword. And, with buzzwords, we often forget to slow down and investigate what they actually entail. This article is dedicated to exploring five key aspects of data governance – emphasizing the importance of implementing it from end to end.

    1. Applying an End-to-end Perspective

    Successful data governance needs to be implemented from end to end, meaning that it encapsulates your entire data landscape from data warehouse to analytics solution. It’s like any process: If it’s not governed all the way, then you cannot control the end result. On the whole, data governance is about making sure that the KPIs on which you are basing your business decisions are correct – having a process in place that ensures that secure data is delivered to end-users.

    2. Including the Analytics Solution in Your Data Governance Framework

    However, this end-to-end perspective is often overlooked, and it’s quite uncommon that analytics solutions are included in the data governance framework. Companies are generally pretty good at data governance from the data warehouse side, because they believe that, after the data leaves their data warehouse, nothing will alter that data. In reality, this is not the case, largely because of modern analytics tools that enable users to modify data directly inside the tools.

    Basically, even world-class data governance for your data warehouse doesn’t matter on its own if the data can still be altered downstream. That’s why it’s important to have end-to-end data governance – you need to include your analytics and visualization tools in your governance framework as well. In fact, some analysts are now explicitly saying “Data & Analytics Governance” instead of “Data Governance.”

    3. Leveraging Automation

    If you’re relying on people to perform manual processes in order to achieve a properly governed data landscape, you will never have 100% coverage. Data governance processes need to be automated. If you manage to achieve 90% effective governance, that’s good, but you still have that 10% uncertainty looming over all your decisions. And, if you can’t trust the data, nothing else really matters.

    Additionally, because the world is changing so fast, the only way for BI tools to keep up is through leveraging automation.

    4. Thinking Big, Starting Small, Scaling Fast

    It is crucial to approach data governance step by step. Take a practical “think big, start small, scale fast” approach, and recognize the power of tackling governance from an outside-in perspective, especially if you use self-service analytics.

    Basically, this means starting your data governance efforts with an overview of your entire data landscape, identifying which inconsistencies, objectives and errors are most important, and building your efforts from there.

    All in all, this needs to be aligned with the overarching objectives you have as an organization. Are you trying to:

    • Make self-service analytics easier?

    • Consolidate definitions for your KPIs?

    • Enable end-users to easily find reports or KPIs they need?

    • Solve a compliance issue that requires correct documentation?

    There are so many different objectives that you can take into consideration, and these are just some examples. The most important thing is that you initially focus your governance efforts on your main business objective. Other issues, gaps and targets will follow.

    5. Testing Your Solution to Ensure Continued Alignment to the Governance Strategy

    Although testing code and its results is standard practice throughout software development, analytics tools have yet to adopt this discipline: it is still uncommon to test analytics data. This is probably rooted in the fact that analytics is driven from the business side – a sector that is not used to governing or testing its data processes.

    We encourage analytics users to test their entire solution so that they know that all their data is correct and is aligned with their overall governance framework. Every now and then, technical issues might arise, and it is crucial to be able to act on these proactively. Such issues are hard to spot manually but very easy to test automatically with baseline testing.
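
    The baseline testing described above can be sketched in a few lines. This is a minimal, hypothetical illustration – the KPI names, values, and tolerance are invented – of how freshly computed KPIs could be compared against stored baselines:

```python
# Minimal sketch of automated baseline testing for analytics output.
# The KPI names, values, and tolerance below are hypothetical examples.

def check_against_baseline(current_kpis, baseline_kpis, tolerance=0.01):
    """Return (kpi, current, baseline) for every KPI that drifted
    more than `tolerance` (relative) from its stored baseline."""
    failures = []
    for kpi, baseline in baseline_kpis.items():
        current = current_kpis.get(kpi)
        if current is None:
            failures.append((kpi, None, baseline))  # KPI disappeared entirely
            continue
        drift = abs(current - baseline) / abs(baseline) if baseline else abs(current)
        if drift > tolerance:
            failures.append((kpi, current, baseline))
    return failures

baseline = {"total_revenue": 1_250_000.0, "active_customers": 8_400.0}
current = {"total_revenue": 1_251_000.0, "active_customers": 7_900.0}

# total_revenue drifted 0.08% (within tolerance); active_customers ~6% (flagged)
print(check_against_baseline(current, baseline))
```

    In practice such checks would run automatically after every data load or report refresh, so drift is caught before end-users base decisions on it.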

    Author: Oskar Grondahl

    Source: Qlik

  • How Machine Learning Can Improve Data Quality

    How Machine Learning Helps to Improve Data Quality

    Machine learning makes improving Data Quality easier. Data Quality refers to the accuracy of the data: High-quality data is more accurate, while low-quality data is less accurate. Accurate data/information supports good decision-making. Inaccurate data/information results in bad decision-making. 

    So, intelligent decision-making can be supported by supplying accurate information through the use of machine learning. 

    Machine learning (ML) is a subdivision of artificial intelligence (AI). However, from the late 1970s to the early ’80s, AI researchers lost much of their research funding, owing to exaggerated and broken promises. The small machine learning community that had developed had the option of going out of business or adapting machine learning to accomplish small, specific tasks for the business world. They chose the second option. 

    While the term “artificial intelligence” is often used in promoting machine learning, machine learning can also be treated as a separate industry.

    A variety of individual, successful machine learning algorithms have been used to perform several different tasks. These tasks can be broken down into three basic functions: descriptive, predictive, and prescriptive. A descriptive machine learning algorithm is used to explain what happened. A predictive ML algorithm uses data to forecast what will happen. A prescriptive ML algorithm will use data to suggest what actions should be taken.

    Automation vs. Machine Learning

    The automation used for modern computer systems can be described as a form of software that follows pre-programmed rules. It means that machines are replicating the behavior of humans to accomplish a task. For instance, invoices can be sent out using an automated process, producing them in minutes and eliminating human error. 

    Automation is the use of technology to perform tasks historically performed by humans. 

    Aside from being a component of artificial intelligence, machine learning can also be considered an evolutionary step in automation. At a very basic level, machine learning can be treated as a form of automation that can learn from its mistakes and adjust its responses to new situations. 

    The ML software is exposed to sets of data and draws certain conclusions from that data. It then applies those conclusions to similar situations. 

    How Machine Learning Works

    Machine learning uses algorithms. At its most basic level, an algorithm is a series of step-by-step instructions, similar to a baking recipe. The recipe is called a “procedure,” and the ingredients are called “inputs.” Machine learning algorithms have instructions that allow for alternative responses, while using previous experiences to select the most probable appropriate response. 
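
    To make the contrast with a fixed recipe concrete, here is a deliberately tiny, hypothetical sketch: a pre-programmed rule always answers the same way, while a nearest-neighbor style learner picks its answer from the most similar previously seen example:

```python
# Hypothetical sketch: a fixed, pre-programmed rule versus an algorithm
# that reuses previous experiences (a one-dimensional nearest neighbor).

def fixed_rule(value):
    # A pre-programmed rule: the response never changes with experience.
    return "high" if value >= 50 else "low"

def nearest_neighbor(value, experiences):
    # Pick the label of the most similar previously seen example.
    closest = min(experiences, key=lambda pair: abs(pair[0] - value))
    return closest[1]

experiences = [(10, "low"), (45, "medium"), (90, "high")]
print(fixed_rule(40))                     # "low", regardless of past examples
print(nearest_neighbor(40, experiences))  # "medium", chosen from experience
```

    Real ML algorithms are far more sophisticated, but the principle is the same: the response is selected from prior data rather than hard-coded.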

    A large number of machine learning algorithms are available for a variety of circumstances.  

    Machine learning starts training with data – text, photos, or numbers – such as business records, pictures of baked goods, data from manufacturing sensors, or repair records. Data is collected and prepared for use as training data. And the more training data, the better the resulting program.

    After selecting and collecting the training data, programmers select an appropriate ML model, provide the data, and then allow the machine learning model to train itself to find patterns in the data and make predictions. As time passes, a human programmer can tweak the model, changing its parameters to help achieve more accurate results. 

    Some data is deliberately withheld from the training process and is used later in testing and evaluating the accuracy of the ML training program. This training and testing process produces a machine learning model that can be used for specific tasks requiring flexible responses. 
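
    The split-train-evaluate cycle described above can be sketched as follows. The numbers and the deliberately trivial mean predictor are hypothetical stand-ins for real training data and a real model:

```python
import statistics

# Sketch of the training workflow: split the data, "train" a deliberately
# trivial model (the mean) on one portion, evaluate on the withheld portion.
# The data values are invented for illustration.
data = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.5, 14.5, 13.5, 15.5]

split = int(len(data) * 0.8)          # withhold 20% for evaluation
train, held_out = data[:split], data[split:]

prediction = statistics.mean(train)   # the "training" step

# Evaluate on data the model never saw during training.
mae = statistics.mean(abs(x - prediction) for x in held_out)
print(f"prediction={prediction:.2f}, held-out MAE={mae:.2f}")
```

    Evaluating only on withheld data is what reveals whether the model generalizes, rather than merely memorizing its training examples.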

    While machine learning can be remarkably useful, it is not perfect, and when it makes a mistake, it can be quite surprising. 

    Applying Machine Learning to Data Quality

    Machine learning algorithms can detect anomalies and suggest ways to improve error detection. Generally speaking, this is ideal for improving Data Quality. Listed below are some examples of the tasks machine learning algorithms perform to improve Data Quality:

    • Reconciliation: The process of comparing data from trusted sources to ensure the completeness and accuracy of migrating data. By examining user actions and historical data about how reconciliation issues were resolved previously, machine learning algorithms can use these examples for learning and, by using fuzzy logic, make the reconciliation process more efficient.
    • Missing data: ML regression models are used primarily in predictive analytics to predict trends and forecast outcomes, but can also be used to improve Data Quality by estimating the missing data within an organization’s system. ML models can identify missing records and assess missing data. These models constantly improve their accuracy as they work with more data. 
    • Data Quality rules: Machine learning can translate unstructured data into a usable format. Machine learning can examine incoming data and automatically generate rules that can proactively communicate quality concerns about that data in real time. Manual or automated rules work for known issues, however, the unknowns in data are rising with the increasing complexity of data. With more data, the ML algorithms can predict and detect the unknowns more accurately.
    • Filling in data gaps: Machine learning algorithms can fill in the small amounts of missing data when there is a relationship between the data and other recorded features, or when there is historical information available. ML can correct missing data issues by predicting the values needed to replace those missing values. Feedback from humans can, over time, help the algorithms learn the probable corrections.
    • In-house data cleansing: Manual data entry often includes incomplete addresses, incorrect spellings, etc. Machine learning algorithms can correct many common errors (which spellcheck would not correct, because this involves names and addresses) and help in standardizing the data. ML algorithms can learn to continuously use reference data to improve the data’s accuracy. (If there is no reference data, it’s possible to use recorded links to the data for backtracking purposes.)
    • Improving regulatory reporting: During regulatory reporting, incorrect records may accidentally be turned over to the regulators. Machine learning algorithms can identify and remove these records before they are sent. 
    • Creating business rules: Machine learning algorithms – such as decision tree algorithms – can use an existing business rules engine and information taken from the data warehouse to create new business rules, or improve existing business rules.
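
    As a hedged illustration of the fuzzy matching mentioned in the reconciliation and cleansing examples above (the reference records and threshold are invented), Python’s standard library can score how closely two inconsistently entered names resemble each other:

```python
from difflib import SequenceMatcher

# Sketch of fuzzy matching for reconciliation and cleansing; the
# customer records and the 0.8 threshold are hypothetical examples.

def best_match(value, reference_values, threshold=0.8):
    """Return the reference value most similar to `value`, or None
    if nothing scores above the threshold."""
    scored = [(SequenceMatcher(None, value.lower(), ref.lower()).ratio(), ref)
              for ref in reference_values]
    score, match = max(scored)
    return match if score >= threshold else None

reference = ["Acme Corporation", "Globex Inc", "Initech LLC"]
print(best_match("Acme Corportion", reference))   # misspelling still matches
print(best_match("Stark Industries", reference))  # no close match: None
```

    Production systems typically combine similarity scores like this with learned weights and human feedback, but the core idea of scoring near-matches is the same.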

    The Risks of Poor-Quality Data

    The use of poor-quality data can damage a business and result in unnecessary expenses. Decisions based on inaccurate data can result in severe consequences. Fortunately, machine learning algorithms can catch some of these issues before they cause damage. For example, financial institutions can use machine learning to identify forged transactions. 

    Many businesses are already using machine learning as a part of their evolving Data Management strategy. The availability of off-the-shelf ML software has made access to machine learning much easier.

    Date: July 4, 2023

    Author: Keith D. Foote

    Source: Dataversity


  • How to create a trusted data environment in 3 essential steps

    How to create a trusted data environment in 3 essential steps

    We are in the era of the information economy. Nowadays, more than ever, companies have the capabilities to optimize their processes through the use of data and analytics. While there are endless possibilities when it comes to data analysis, there are still challenges in maintaining, integrating, and cleaning data to ensure that it will empower people to make decisions.

    Bottom up, top down? What is the best?

    As IT teams begin to tackle the data deluge, a question often asked is: should this problem be approached from the bottom up or the top down? There is no one-size-fits-all answer here, but every data team needs a high-level view that offers a quick overview of its data subject areas. Think of this high-level view as a map you create to define priorities and identify problem areas for your business within the modern data-based economy. This map will allow you to set up a phased approach to optimizing the data assets that contribute the most value.

    The high-level view unfortunately is not enough to turn your data into valuable assets. You also need to know the details of your data.

    Getting the details from your data is where a data profile comes into play. This profile describes your data from the technical perspective; the high-level view (the enterprise information model) gives you the view from the business perspective. Real business value comes from combining both views: a transversal, holistic view of your data assets that allows you to zoom in or out. The high-level view enriched with technical details (even without the profiling) lets you start with the most important phase of the digital transformation: discovery of your data assets.

    Not only data integration, but data integrity

    With data of all types and sizes travelling around, integrating the data streams across various partners, apps, and sources has become critical. But it’s more complex than ever.

    Due to the size and variety of the data being generated, not to mention the ever-increasing speed of go-to-market scenarios, companies should look for technology partners that can help them achieve this integration and integrity, either on premises or in the cloud.

    Your 3 step plan to trusted data

    Step 1: Discover and cleanse your data

    A recent IDC study found that only 19% of a data professional’s time is spent analyzing information and delivering valuable business outcomes. They spend 37% of their time preparing data and 24% of their time goes to protecting data. The challenge is to overcome these obstacles by bringing clarity, transparency, and accessibility to your data assets.

    This discovery platform – which at the same time allows you to profile your data, understand its quality, and build a confidence score that earns the business’s trust in the data assets – takes the form of an auto-profiling data catalog.

    Thanks to the application of Artificial Intelligence (AI) and Machine Learning (ML) in data catalogs, data profiling can be provided as a self-service capability for power users.

    Bringing transparency, understanding, and trust to the business brings out the value of the data assets.
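
    The kind of column profile an auto-profiling data catalog computes can be sketched simply. The sample values and the weighting used in the confidence score below are invented for illustration:

```python
# Hypothetical sketch of the column profiling behind an auto-profiling
# data catalog: completeness and uniqueness feed a confidence score.
# The sample values and the 70/30 weighting are invented.

def profile_column(values):
    total = len(values)
    non_null = [v for v in values if v not in (None, "")]
    completeness = len(non_null) / total if total else 0.0
    uniqueness = len(set(non_null)) / len(non_null) if non_null else 0.0
    confidence = round(0.7 * completeness + 0.3 * uniqueness, 2)
    return {"completeness": completeness,
            "distinct": len(set(non_null)),
            "confidence": confidence}

emails = ["a@x.com", "b@x.com", None, "a@x.com", ""]
print(profile_column(emails))
```

    A real catalog profiles every column automatically and surfaces scores like this to end-users, so trust in each data asset is visible at a glance.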

    Step 2: Organize data you can trust and empower people

    According to the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, 2017: “By 2020, organizations that offer users access to a curated catalog of internal and external data will realize twice the business value from analytics investments than those that do not.”

    An important phase in a successful data governance framework is establishing a single point of trust. From the technical perspective, this translates to collecting all the data sets in a single point of control. The governance aspect is the capability to assign roles and responsibilities directly in that central point of control, which allows you to operationalize your governance instantly, from the place the data originates.

    The organization of your data assets goes hand in hand with the business understanding of the data, transparency, and provenance. The end-to-end view of your data lineage ensures compliance and risk mitigation.

    With the central compass in place and the roles and responsibilities assigned, it’s time to empower people for data curation and remediation, in which ongoing communication is of vital importance for the adoption of a data-driven strategy.

    Step 3: Automate your data pipelines & enable data access

    Different layers and technologies make our lives more complex. It is important to keep our data flows and streams aligned and to adapt to swift changes in business needs.

    The needed transitions, data quality profiling, and reporting can be extensively automated.

    Start small and scale big. Much of the intelligence needed these days can be achieved by applying AI and ML. These algorithms take cumbersome work out of the hands of analysts and scale better and more easily. This automation gives analysts a faster understanding of the data and lets them build better insights, faster, in a given amount of time.

    Putting data at the center of everything, implementing automation, and provisioning data through one single platform are key success factors in your digital transformation and in becoming a real data-driven organization.

    Source: Talend

  • Mastering Data Governance: A Guide for Optimal Results

    Mastering Data Governance: A Guide for Optimal Results

    With digital transformation initiatives on the rise, organizations are investing more in Data Governance, a formalized practice that connects different components and increases data’s value. Some may already have established Data Governance programs for older Data Management systems (such as for controlling master data) but may lack control over newer technologies, like training an AI to generate content, and may need guidance on best practices to follow.

    Steve Zagoudis, a leading authority on Data Governance, notes that a lack of awareness explains some of the disconnects in applying lessons learned from past Data Governance to newer programs. What’s more, Data Governance has a bad reputation as a drag on innovation and technological advancement because of perceived meaningless workflows. 

    To turn around these trends, companies should embrace Data Governance best practices that can adapt to new situations. Furthermore, businesses must demonstrate how these activities are relevant to the organization. Using the tactics outlined below promises to achieve these goals. 

    Lead by Doing 

    With Data Governance, actions speak louder than words, especially regarding newer projects using newer technologies. Any communications from the top-down or bottom-up need to show how Data Governance activities align with business innovations. Try having:

    • Executives lead as engaged sponsors: “Executives need to support and sponsor Data Governance wherever data is,” advises Bob Seiner. Often, a data catalog (a centralized metadata inventory) can help guide executives on where to apply Data Governance. When implementing Data Governance, managers should communicate consistently and clearly about the approach, roles, and value of Data Governance. They need to emphasize that these aspects apply to new projects too. Moreover, senior leadership needs to visibly support and allocate resources – time, money, technology, etc. – toward data stewardship, formalizing accountability and responsibility for company data and its processes. 
    • Data stewards lead through information sharing: Data stewards typically have hands-on experience with company data. Consequently, these workers are a treasure trove of knowledge valuable to their co-workers, manager, and other organizations. Not only does this information exchange help others in the company learn, but sharing also activates data stewards and keeps them highly invested in Data Governance practices. With this advantage, stewards are more likely to extend their work to newer projects.
    • All employees lead by applying a company’s Data Governance best practices: All employees should take care of Data Quality and communicate when they have questions or helpful feedback. Business leaders should provide two-way channels for stewards to encourage Data Governance adoption among their departments and allow users to express their problems or ask questions.

    Understand the “Why”

    Business requirements change quickly as companies become more data-driven. For example, the metadata requirements previously used to describe application error data and set forth by Data Governance may need a different format to train a generative AI model to suggest fixes.

    To keep Data Governance relevant, teams must create actionable use cases and connect the dots to Data Governance activities. Out of this work should come a purpose statement defining success, with measurements and stories that show project progress achieved through Data Governance.

    Data Governance purpose statements help navigate the support needs of data products: ready-to-use, high-quality data from services developed by team members. To justify updates to Data Governance processes, business leaders should present a new data product as a proof of concept and explain a roadmap for getting to the changes. Consider covering a few critical Data Governance activities and how they benefit the data product in the presentation.

    By using the Data Governance purpose statement as a guide and building out solid use cases tied to data products, teams can understand the benefits of good Data Governance and the consequences of poor Data Governance. Furthermore, this messaging solidifies when it is repeated and becomes self-evident through data product usage and product maturity.

    Cover Data Governance Capabilities

    Before starting or expanding new projects, organizations must be clear about their capacity to take on Data Governance activities. For example, if a software application needs to ship in three months, and three-quarters of the team must spend 90% of their time and money getting the technology running and fixing bugs, then resources for Data Governance activities such as metadata management will be scarce.

    To get a complete picture, organizations usually assess where their Data Governance practices stand today in terms of best practices and maturity.

    Once companies have compiled feedback and metrics about their Data Governance practices, they can share recommendations with stakeholders and quickly check improvements and goals as they apply Data Governance. As resources fluctuate, business leaders might consider bringing Data Governance into project daily standups or scrum meetings to track and communicate progress.

    As project managers and engineers help one another when blocked, they can note when a data product story with Data Governance activities has been completed. In addition, adding Data Governance to daily meetings can prompt team members to bring back Data Governance components that have worked in the past – data, roles, processes, communications, metrics, and tools – and reuse them to solve current issues. 

    Implement a Well-Designed Data Governance Framework

    A well-designed Data Governance framework provides components that structure an organization’s Data Governance program. Implementing such a framework means that Data Governance assures an organization of reliable data with a good balance between accessibility and security.

    According to the recent Trends in Data Management report, over 60% of organizations have some Data Governance in place, though it is in the initial stages. Existing Data Governance programs can take many different formats, including:

    • Command-and-Control: A top-down approach that sets the Data Governance rules and assigns employees to follow them
    • Formalized: Training programs constructed as part of an organization’s data literacy initiative to encourage Data Governance practices
    • Non-Invasive: A formalization of existing roles 
    • Adaptive: A set of Data Governance principles and definitions that can be applied flexibly and made part of business operations using a combination of styles

    The best approach works with the company culture and aligns with their data strategies, combining choices and decisions that lead to high-level goals. 

    Gather the metrics and feedback about Data Governance capabilities to understand what processes, guidelines, and roles exist and are working. Then, decide how many existing components can be used versus how much work needs to reframe the Data Governance approach. 

    For example, a command-and-control construction may allow enough flexibility in a start-up environment with two or three people; however, as a company adds more employees, Data Governance may need to be reformulated to a non-invasive or more adaptive approach. 

    Evaluate automation, such as a data catalog or Data Governance tools, regardless of the Data Governance framework chosen. Ideally, companies want automation that empowers workers in decision-making and adapts as necessary to the Data Governance purpose.

    Develop an Iterative Process

    To adapt, companies must develop an iterative process with their Data Governance components. This tactic means flexibility in adjusting goals to get to the Data Governance purpose.

    For example, a Data Governance program’s purpose might be ensuring Data Quality – data that is fit for consumption. Initially, Data Governance members discuss critical data elements around a data model built by a team. 

    Should this task lead to unresolved disagreements after a sprint, business leaders can try shifting gears. Shelve the debate and focus on connecting terminology to shared automation tools the members use.

    Specific Data Governance processes may need updates as data moves between older and newer technologies. These cases may need new Data Governance stories for sprint planning and execution. Once an organization finds out what works over a few sprints, the team can repeat these activities and consistently communicate why and how the workflow helps.

    Conclusion

    Because business environments change rapidly, Data Governance best practices must be adaptable. Gartner has estimated that 80% of organizations will fail to scale digital business because they persist in outdated governance processes. 

    Versatile Data Governance activities require engagement from all levels of the organization, and especially sponsorship from executives. Flexibility comes from understanding the purpose behind Data Governance activities and knowing the organization’s Data Governance capabilities, so that what works can be used to the fullest extent.

    Data Governance needs to be implemented through a good framework that includes automation. In addition, any software tools supporting Data Governance should be evaluated on how well they match the program’s purpose. 

    Data Governance best practices must work in iterations to become agile in changing business contexts. Businesses should plan on modifying the Data Governance controls used today as new technologies emerge and business environments evolve.

    Author: Michelle Knight

    Source: Dataversity

  • Navigating the Waves of Generative AI from Tech Leaders' Perspective  

    Navigating the Waves of Generative AI from Tech Leaders' Perspective

    Risky, immature and full of promise, generative AI is forcing tech leaders to assess challenges on many fronts, and in real time. Generative AI is widely regarded as one of the great technology breakthroughs of our time. On the back of thousands of headlines generated by OpenAI’s ChatGPT, it has provoked urgent responses from many tech giants and is the theme of, and main topic of discussion at, tech conferences worldwide. But, as with any big new wave, there is a risk of once-promising projects being washed up, and there are clear and obvious concerns over governance, quality, and security. To cut through the froth, CIO.com polled a range of IT leaders and experts for their views on where we are with generative AI, their hopes, and their concerns.

    The state of play

    The storied UK IT chief Paul Coby, now CIO of family property developer Persimmon Homes, has seen many trends come and go but he is bullish on generative AI, even though it only made its first appearance in November 2022. “I believe generative AI is a game changer at a fundamental level,” he says. “I was at a Gartner conference in the US where they called generative AI out as the ‘third digital revolution’ after mainframe computing and the internet. The impact could really be that profound since we have a tool that can be applied to multiple use cases, from writing and designing products, to visualizations, checking code, and so forth.”

    Another experienced IT leader, David Ivell, chief product and technology officer at behavior management training company Team Teach, is already harnessing generative AI’s power. “Generative AI is a key part of our business strategy, facilitating growth with AI-enabled processes already live in production,” he says. “Since the middle of last year, we’ve been analyzing the potential impact, opportunities, and risks of the speed of innovation in this area, as well as introducing policies and implementing measures to minimize risks. But overall, we see this as a huge opportunity. We ran workshops with every division of our business, educating them on the accelerating innovation in this area, brainstorming opportunities and risks. We’ve been shortlisting and building out potential proof-of-concept options and modelling revenue impact, and have already taken one concept through our innovation lab and into live production.”

    Jon Collins, of technology analyst firm GigaOm and author of The Technology Garden: Cultivating Sustainable IT-Business Alignment, is both a market watcher and a user. “We’re testing ChatGPT at an individual level,” he says. “The potential is highly positive and impactful, particularly as a research tool or one which gives an initial, albeit fully formed, answer. But it’s still to be seen how generative AI replaces, rather than augments, human involvement in terms of information. From a design perspective, such tools are more compelling.”

    Neil Ward-Dutton, VP, AI and Intelligent Process Automation European Practices at IDC, suggests that generative AI usage is high but business strategy may lag. “Generative AI has colossal potential to impact multiple areas of business,” he says. “An IDC survey from March 2023 saw 21% of respondents say they’re already investing in generative AI this year, and a further 55% are exploring potential use cases. In general, we see a small number of organizations using generative AI based on a strategy or plan, shaped by clear policies, and a lot of grassroots experimentation, but that’s almost always happening in a strategy vacuum.”

    What works (and what doesn’t)

    So if projects are already getting off the ground, where do leaders feel generative AI works best, and how? “The best practices are undoubtedly cross-functional collaboration, ‘try before you buy,’ and learn from what you do,” says Marc O’Brien, CIO at radiology healthcare service provider Medica Group. “In my experience, the algorithms from reputable firms do what they say on the tin but what really matters is where you position in the workflow.”

    Team Teach’s Ivell believes companies can gain a fast start by using tools being built into applications and suites. “One of the key and immediate opportunities of generative AI is it’s already being built into some tools we already use, be that Power BI, Adobe or more industry-specific apps,” he says. “To take advantage of these needs some internal discovery or analysis of these new functions, understanding how we’d use them, and, in the first instance, training our staff how to exploit the new features. People tend to use tools in the way they always have, and adoption of new features can be slow, so we need to accelerate that.”

    GigaOm’s Collins is an advocate of the always popular “start small” school of thought. “Governance has to come first, given the answers offered by generative AI solutions come with risks and caveats,” he says. “From experience, text answers can be wrong, misleading, or incomplete, and code answers can be buggy or faulty. Starting small has to be the way forward, given that success with the tooling, at least currently, is often down to the ability to create well-formed questions.”

    Ward-Dutton and IDC agree and add five other points of guidance: focusing on value and functionality, finding specific use cases, understanding limitations, considering the impact on work and jobs, and managing risks such as security, confidentiality, privacy and quality.

    Obstacles and obstructions

    Safety, bias, accuracy, and hallucination continue to be recurring issues. Jon Cosson, head of IT and CISO at wealth management firm JM Finn, recalls asking ChatGPT for his own biography. The system listed only about 70% of his CV, and simply invented a period at a well-known bank. “We need to realize where it can be enormously powerful and where it assists us, but be careful we retain human oversight,” he says. “It’s made my life easier because it allows me to write documents and make them richer, but if you rely on this beast it can bite you. We’re using it selectively in tests to see its power, but it’s heavily monitored and we won’t deploy anything if it causes any adverse decision making.”

    Medica’s O’Brien issues a caution as well. “Within healthcare the regulatory environment and the commercial frameworks are years behind the technology,” he says. “This makes it almost impossible to monetize, and, therefore, fund the implementation and usage of the algorithms. This is true across both public and independent sectors. That said, I believe once these barriers are overcome, benefits-led implementation will be swift.”

    Coby adds that the immature regulatory and legal structures around using generative AI and large language models (LLMs) need to be carefully considered, as does the tendency of current programs to hallucinate. “This is why, at this stage, it’s essential that any use is checked by someone with expert knowledge. Moving from PoCs to mainstream implementation will need to be carefully controlled.”

    Ivell adds that generative AI could create unwelcome competitive dynamics. “As part of our preparation of a generative AI strategy, it’s important to understand where this technology could enable competition or startups to use it to attack our market share with new tools producing faster-to-market and lower-cost products or services,” he says. “So there’s a lot to keep aware of—not just how we may exploit it but also keeping an eye on how it’s starting to be used as a threat.”

    And in terms of intellectual property risks, IDC’s Ward-Dutton says organizations’ own IP can leak into the public domain if they aren’t careful when using public generative AI services. “Some system providers are facing lawsuits because they trained their systems on data and content without getting permission from the original creators,” he says, adding that costs can also be prohibitive because the core technology powering today’s generative AI systems is very expensive to train.

    Searching for the sweet spots

    There are varying opinions on where generative AI will make itself most felt. Collins nominates research and design: “It’s perfectly reasonable the challenges of creating a functional website from scratch should go away, as well as other areas that were already ripe for automation.” O’Brien adds it’s anything that produces content for consumption by humans, where regulation is light and pricing can fund the algorithm.

    IDC’s Ward-Dutton says the analyst’s customer panel points to three main clusters: improving customer and employee experiences; bolstering knowledge management; and accelerating software delivery. In time, he predicts, they’ll be joined by enterprise communication (including contact centres); collaboration and knowledge-sharing; content management; and design, research and creative activities.

    Though he concedes it’s too early to say, Coby believes initial successes will come from enabling humans to be much more productive by using generative AI to produce first drafts and then using them as foundations. “This is likely to be in multiple areas and will require new skills in asking the right queries,” he says.

    Ivell concurs regarding areas of content, code generation, and customer support, but says he’s most excited by research opportunities. “AI can analyze large volumes of data in textual form to create new forms, summaries, and analyses of the data sets,” he says. “It can also provide analysis of large data sets to produce enterprise-level insight previously unavailable such as understanding patterns in behavior and creating insight we can use to build new products.”

    JM Finn’s Cosson, an enthusiastic blogger, says text and graphical content using tools such as Midjourney are obvious near-term opportunities. “It’s already powerful in blog sites and a lot of people will use it as a framework,” he says. “You don’t want to lose the human creative element but you can apply human oversight elements and deliver some outstanding pieces. Where you see downsides are in marketing types and copywriters losing their jobs, but there will be new jobs created.”

    A Trojan horse?

    Some watchers believe that generative AI can be the trailblazer for wider application of AI and ML. IDC’s Ward-Dutton is particularly enthusiastic. “In just a few months, generative AI has simultaneously captured the attention, imagination, and trepidation of tech and business leaders across the world,” he says. “We believe generative AI is a trigger technology that will usher in a new era of computing—the Era of AI Everywhere, which will completely change our relationship with data and how we extract value from both structured and unstructured data. The rapid adoption of generative AI moves AI from an emerging software segment in the stack to a linchpin technology at the center of a platform transition.”

    But CIOs are vocal about the importance of robots working in tandem with people. “AI works best when it works together with humans,” says Cosson. “The human brain is still worth something. Empathy and humanity are important and we need to work out how AI complements and fuses them together.”

    Date: August 29, 2023

    Author: Thomas Veitch

    Source: CIO

  • On-premise or cloud-based? A guide to appropriate data governance

    On-premise or cloud-based? A guide to appropriate data governance

    Data governance involves developing strategies and practices to ensure high-quality data throughout its lifecycle.

    However, besides deciding how to manage data governance, you must choose whether to apply the respective principles in an on-premise setting or the cloud.

    Here are four pointers to help:

    1. Choose on-premise when third-party misconduct is a prevalent concern

    One of the goals of data governance is to determine the best ways to keep data safe. That's why data safety comes into the picture when people choose cloud-based or on-premise solutions. If your company holds sensitive data like health information and you're worried about a third party not abiding by your data governance policies, an on-premise solution could be right for you.

    Third-party cloud providers must abide by regulations for storing health data, but they still make mistakes. Some companies offer tools that let you determine a cloud company's level of risk and see the safeguards it has in place to prevent data breaches. You may consider using one of those to assess whether third-party misconduct is a valid concern as you strive to maintain data governance best practices.

    One thing to keep in mind is that the shortcomings of third-party companies could cause long-term damage to your company's reputation. For example, if a cloud provider's misconfigured server allows a data breach to happen, the provider is to blame. But the headlines about the incident will likely feature your brand prominently and may mention the outside company only in a passing sentence.

    If you opt for on-premise data governance, your company alone is in the spotlight if something goes wrong, but it's also possible to exert more control over all facets of data governance to promote consistency. When you need scalability, cloud-based technology typically allows you to ramp up faster, but you shouldn't do that at the expense of a possible third-party blunder.

    2. Select cloud-based data governance if you lack data governance maturity

    Implementing a data governance program is a time-consuming but worthwhile process. A data governance maturity assessment model can be useful for seeing how your company's approach to data governance stacks up to industry-wide best practices. It can also identify gaps to illuminate what has to happen for ongoing progress to occur.

    Using a data governance maturity assessment model can also signal to stakeholders that data governance is a priority within your organization. However, if your assessments show the company has a long way to go before it can adhere to best practices, cloud-based data governance could be the right choice.

    That's because the leading cloud providers have their own data governance strategies in place. Those shouldn't replace the ones used in-house at your company, but they could help you fill in the known gaps while improving company-wide data governance.

    3. Go with on-premise if you want ownership

    One of the things that companies often don't like about using a cloud provider for data governance is that they don't own the software. Instead, they usually enter into a leasing agreement, similar to leasing an automobile. So, if you want complete control over the software used to manage your data, on-premise is the only option that allows that ownership.

    One thing to keep in mind about on-premise data governance is that you are responsible for data security. As such, you must have protocols in place to keep your software updated against the latest security threats.

    Cloud providers usually update their software more frequently than you might in an on-premise scenario. That means you have to be especially proactive about dealing with known security flaws in outdated software. Indeed, on-premise data governance has the benefit of ownership, but your organization has to be ready to accept all the responsibility that option brings.

    4. Know that specialized data governance tools are advantageous in both cases

    You've already learned a few of the pros and cons of on-premise versus cloud-based solutions to meet your data governance requirements. Don't forget that no matter which of those you choose, specialty software can help you get a handle on data access, storage, usage and more. For example, software exists to help companies manage their data lakes whether they are on the premises or in the cloud.

    Those tools can sync with third-party sources of data to allow monitoring of all the data from a single interface. Moreover, they can track metadata changes, allowing users to become more aware of data categorization strategies.

    Regardless of whether you ultimately decide it's best to manage data governance through an on-premise solution or in the cloud, take the necessary time to investigate data governance tools. They could give your company insights that are particularly useful during compliance audits or as your company starts using data in new ways.

    Evaluate the tradeoffs

    As you figure out if it's better to entrust data governance to a cloud company or handle it on-site, don't forget that each option has pros and cons.

    Cloud companies offer convenience, but only if their data governance principles align with your needs. And, if customization is one of your top concerns, on-premise data governance gives you the most flexibility to make tweaks as your company evolves.

    Studying the advantages and disadvantages of these options carefully before making a decision will help you make a fully informed choice about how to accommodate your company's present and future needs.

    Author: Kayla Matthews

    Source: Information-management

  • The IT Dilemma: Addressing the 12 Major Issues of Today  

    The IT Dilemma: Addressing the 12 Major Issues of Today

    Economic uncertainty, the need to continually drive business value, and shifting ways of working and leveraging IT continue to reshape the CIO agenda as priorities shift mid-year. The list of expectations on CIOs continues to lengthen, as they face pressure to seize on new technologies and drive the organization forward while simultaneously improving efficiency, dealing with staffing challenges, and facing a tech skills gap.

    Granted, each CIO will have a unique list of priorities and challenges based on enterprise objectives and its industry vertical. But the following 12 issues are ones that CIOs commonly find themselves contending with today.

    1. Hardening cybersecurity

    Not only do most CIOs see cybersecurity as one of their top issues today; a majority name it as their No. 1 concern. Consider the findings from the 2023 Insight Intelligent Technology Report, an IDC InfoBrief. The survey found that cybersecurity is one of the top challenges facing organizations, with 56% of respondents saying so.

    2. Operationalizing artificial intelligence

    Harnessing the power of AI — and particularly generative AI — is also dominating the CIO’s agenda, according to multiple sources. “The No. 1 question these days is, ‘How do we leverage generative AI?’” says Thomas Phelps, vice president of corporate strategy and CIO of Laserfiche. “It’s being brought up by the executive staff, the board, at trade shows, and in the media; you can’t walk anywhere without hearing about generative AI.”

    Phelps and other tech execs say they’re focusing on how to use AI throughout their organizations to not only be more effective and efficient but to be more innovative and transformative, too. But they’re additionally tasked with doing so in a secure and ethical manner, necessitating the development of new strategies, practices, and governance policies that for many are still very much in the works.

    3. Coping with economic pressures, uncertainty

    Although the majority of CIOs have seen their IT budgets increase, they also say they’re feeling the pinch of inflation because that extra money isn’t covering their higher costs. For example, the CIO Pulse: 2023 Budgets and Priorities survey from SoftwareOne reports that 93% of the 600 IT leaders it surveyed said their budgets are up, yet 83% also say that they will have to achieve more with less. That has CIOs looking at how they can drive more efficiencies both in their own IT departments as well as throughout their organizations.

    “In this age of economic uncertainty, there are a lot of questions about whether we are as efficient as we can be and whether we really understand our costs,” says Joseph Bruhin, CIO of Breakthru Beverage Group.

    Barry Brunsman, a principal at professional services firm KPMG and leader of its Global CIO Center of Excellence, says CIOs are walking a tightrope here: They’re focusing on delivering efficiencies so their organizations are prepared if national economies tip into recession (the subject of ongoing speculation that has yet to happen in most places) but at the same time they don’t want to scale back on initiatives that could better position their organizations if economies stay steady or strengthen. “There’s kind of a paralysis around what CIOs are going to do,” he says.

    4. Modernizing at speed

    The ever-increasing pace of technology change also has CIOs’ full attention. Joel Schwalbe, CIO of biotech company Transnetyx, says he’s focused on continuously reducing technical debt and modernizing his company’s tech stack so his IT team can minimize the resources needed for maintaining the environment and instead maximize the time and energy spent on supporting business objectives.

    It’s a constant challenge, Schwalbe says. “It’s tricky, because as technology continues to evolve, you want to make sure you place your bets in the right area,” he says, adding that the goal is to stay on top of modernization and transformation efforts so “you don’t get to the point where tech debt is a problem.”

    Lenovo’s Global Study of CIOs gives some insight into this work. It found that 61% of respondents believe that “their business would feel an impact in no more than a few weeks if they halted spending on digital transformation initiatives.” Moreover, 57% said they’d replace half or more of their company’s current technology if given the chance to start from scratch and 25% said they’d replace most or all of it.

    5. Innovating meaningfully

    Of course, as CIOs know, that modernization effort can’t just be for the sake of getting new technologies. It must drive business objectives and ultimately transformation. That’s why, Phelps says, he and other CIOs continue to build “an ongoing culture of innovation” within their IT departments and their organizations as a whole. CIOs are well-positioned to take on the task, Phelps says, because they work across all the functional areas of the enterprise and they are among the best equipped to propose and deliver innovative digital services. “Everything is now digital, and CIOs have to shape that narrative. That has elevated the role of CIO to be one of a digital leader who can plug into innovative initiatives,” Phelps adds.

    Research confirms the emphasis on IT as innovator: In the May 2023 Technology Pulse Poll from professional services firm EY, 94% of surveyed tech executives said that “company-wide innovation will help them come out of the current economic downturn a stronger company than before”; 94% said their company plans to increase investments in IT or emerging technologies over the next year; and 81% said their company plans to make an innovation-related acquisition in the next six months.

    6. Ensuring IT’s value proposition

    Just as innovation must bring tangible returns to the enterprise, so too are CIOs expected to work with their business colleagues to calculate the value proposition of tech initiatives. As Bruhin explains, CIOs today must have — and instill in their partners who come requesting new technologies — a “benefits realization mentality.” “There’s a focus on the value it’s going to generate for the organization,” he adds.

    7. Driving data insights

    Speaking of value, CIOs are also looking at how to maximize all the data-related investments they’ve made in recent years. CIO’s 2023 State of the CIO survey found that 34% of IT leaders list leveraging data as a major tech initiative, putting it second on the list of priorities — just behind security and risk management. And EY’s Technology Pulse Poll found that 62% of surveyed tech execs have prioritized big data and analytics investments.

    But where past years’ investments focused on building solid data infrastructure, CIOs are now spending on technologies and training to help people throughout their organizations use data. “We’re seeing a shift in spending to technologies that drive the democratization of data and analytics — and we are expecting that spend will dramatically increase,” Brunsman says, adding that, although many organizations have data specialists generating insights from the vast troves of data they’ve accumulated, they now want to get to a place where everyone in their organization can get value out of data.

    8. Transforming to meet regulatory requirements

    CIOs are also still working with their executive peers to manage their data, says Ray Velez, global CTO with the digital consultancy Publicis Sapient. Granted, many CIOs have been working with their chief data officers, chief marketing officers and other C-suite leaders on this topic for years. But, as Velez points out, the rules and regulations around data evolve.

    As a result, Velez says he sees “a lot of focus on modernizing the customer data stack” and adopting emerging technologies that allow organizations to meet regulatory requirements such as offering a customer opt-out capability while still enabling organizations to access and use the data needed to deliver personalized customer service and insights to decision-makers. “CIOs need to be able to supply things like consent management and work with others to create and transform how customer data is used, stored and the communication of the value proposition,” Velez adds.

    9. Democratizing tech development

    CIOs are not only putting data into the hands of workers throughout their organizations; they’re increasingly putting software development tools there, too. Research firm IDC expects sales of low-code/no-code platforms to grow at a rapid clip — 13.9% annually — through 2026. Jamie Smith, CIO at the University of Phoenix, believes enabling non-IT workers to create some of their own capabilities is a “force multiplier” that benefits the entire organization by enabling workers closest to business processes and customers to create the digital experiences they want and need to get their work done.

    10. Acquiring and retaining talent

    Global business consulting firm Protiviti surveyed 1,304 C-level executives and directors to understand the top risks they face. The top of their list? The ability to attract and retain top talent in a tightening labor market, which they see as limiting their ability to achieve operational targets. Although many functional leaders experience challenges finding and keeping talent, CIOs are among the most taxed in this regard.

    “The global talent shortage has burdened the job market, and researchers expect this issue to worsen, with 85 million jobs forecast to go unfilled by 2030 due to a lack of skilled workers,” says Jeremy Rafuse, vice president of IT and head of digital workplace at software company GoTo. “With tech advancement accelerating by the day, it’s possible the problem gets worse before it gets better. IT departments need to be able to keep up with constant updates, new operating systems, and emerging threats, or understaffed teams risk falling behind.”

    He continues: “With so much competition for top talent and laid-off workers from Big Tech firms, companies need to be creative in how they set themselves apart. CIOs and IT leaders should also constantly explore methods to invest in the internal upskilling of their current staff in order to attract and maintain talent.”

    11. Preparing IT teams for the future

    CIOs have to think not only about their teams today, but their teams of tomorrow, too. IT workers must keep pace with evolving technologies to ensure they can deliver and support the tools and capabilities their organizations will need to be successful. Moreover, technologists want to learn new skills, with multiple studies showing that they’ll switch jobs if they feel like they’re stagnating in their current role. (In one recent survey, 47% of responding tech workers said they’re considering leaving their current job to grow their skills.)

    “Skilled IT labor is going to continue to be difficult to find in the future, so CIOs more than ever are going to have to look at opportunities for their staff to upskill,” says Matt Deneroff, vice president of technology talent solutions at staffing firm Robert Half.

    12. Creating a hybrid work environment that works for everyone

    CIOs say they themselves also must learn to work in new ways as the world shifts, particularly when it comes to supporting a hybrid workforce across their organization as well as leading their own teams in this new workplace model. “Equipping employees with the tools and support they need to do their jobs, no matter where they are, needs to be a top priority for IT teams,” Rafuse says. But at the same time Rafuse says CIOs, himself included, should consider what that means for IT staffers.

    “[The] IT manager must balance this flexibility with a good life/work balance. Just because IT teams can always be on, doesn’t mean they should if you want to keep good talent around for the long term,” he says, noting that “businesses should strive for quality over quantity by streamlining the number of technologies and software they are using thus reducing the burden on IT. This will create a more positive and functional virtual working environment for employees, while reducing cross-department friction and IT team burnout.”

    Meanwhile, Lenovo’s Global Study of CIOs showed that 66% of CIOs listed cybersecurity/ransomware and 66% listed data privacy/security as top challenges. “As technology evolves, hackers adjust their methods to new norms, making security threats a constant concern that requires IT leaders and their teams to be vigilant around the clock,” says GoTo’s Jeremy Rafuse.

    Date: August 3, 2023

    Author: Mary K. Pratt

    Source: CIO

  • Understanding and taking advantage of smart data distancing

    Understanding and taking advantage of smart data distancing

    The ongoing COVID-19 pandemic has made the term 'social distancing' a cynosure of our daily conversations. There have been guidelines issued, media campaigns run on prime time, hashtags created, and memes shared to highlight how social distancing can save lives. When you have young children talking about it, you know the message has cut across the cacophony! This might give data scientists a clue of what they can do to garner enterprise attention towards the importance of better data management.

    While many enterprises kickstart their data management projects with much fanfare, egregious data quality practices can hamper the effectiveness of these projects, leading to disastrous results. In a 2016 research study, IBM estimated that bad quality data costs the U.S. economy around $3.1 trillion every year.

    And bad quality data affects the entire ecosystem; salespeople chase the wrong prospects, marketing campaigns do not reach the target segment, and delivery teams are busy cleaning up flawed projects. The good news is that it doesn’t have to be this way. The solution is 'smart data distancing'.

    What is smart data distancing?

    Smart data distancing is a crucial aspect of data management, and more specifically of data governance: it enables businesses to identify, create, maintain, and authenticate data assets to ensure they are free of corruption or mishandling.

    The recent pandemic has forced governments and health experts to issue explicit guidelines on basic health etiquette: washing hands, using hand sanitizer, keeping social distance, and so on. At times, even the most rudimentary facts need to be recapped multiple times so that they become accepted practices.

    Enterprises, too, should strongly emphasize the need for their data assets to be accountable, accurate, and consistent to reap the true benefits of data governance.

    The 7 do’s and don’ts of smart data distancing:

    1. Establish clear guidelines based on global best data management practices for the internal or external data lifecycle process. When accompanied by a good metadata management solution, one that includes profiling, classifying, managing, and organizing diverse enterprise data, this can vastly improve target marketing campaigns, customer service, and even new product development.

    2. Set up quarantine units for regular data cleansing or data scrubbing, matching, and standardization for all inbound and outbound data.

    3. Build centralized data asset management to optimize, refresh, and overcome data duplication issues for overall accuracy and consistency of data quality.

    4. Create data integrity standards using stringent constraint and trigger techniques. These techniques will impose restrictions against accidental damage to your data.

    5. Create periodic training programs for all data stakeholders on the right practices to gather and handle data assets and the need to maintain data accuracy and consistency. A data-driven culture will ensure the who, what, when, and where of your organization’s data and help bring transparency in complex processes.

    6. Don’t focus only on existing data that is readily available but also focus on the process of creating or capturing new and useful data. Responsive businesses create a successful data-driven culture that encompasses people, process, as well as technology.

    7. Don’t take your customer for granted. Always choose ethical data partners.
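    Steps 2 and 4 above can be sketched concretely. The following is a minimal, hypothetical Python illustration using SQLite: inbound records land in a quarantine table, are cleansed and standardized, and are then promoted into a main table protected by CHECK, UNIQUE, and trigger constraints. All table names, columns, and rules are invented for the example.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Quarantine unit (step 2): raw inbound data lands here first.
        CREATE TABLE quarantine_customers (
            email TEXT,
            country TEXT
        );

        -- Main table guarded by integrity standards (step 4): a CHECK
        -- rejects malformed emails, and UNIQUE prevents duplicates.
        CREATE TABLE customers (
            email TEXT UNIQUE NOT NULL CHECK (email LIKE '%_@_%._%'),
            country TEXT NOT NULL
        );

        -- Trigger technique (step 4): block accidental damage to the data.
        CREATE TRIGGER no_delete BEFORE DELETE ON customers
        BEGIN
            SELECT RAISE(ABORT, 'deletes require governance approval');
        END;
    """)

    raw = [(" Alice@Example.COM ", "us"),
           ("alice@example.com", "US"),
           ("bad-email", "US")]
    conn.executemany("INSERT INTO quarantine_customers VALUES (?, ?)", raw)

    # Cleansing, matching, and standardization (step 2): trim whitespace,
    # lowercase emails, uppercase countries; the set() deduplicates matches.
    cleaned = {
        (email.strip().lower(), country.strip().upper())
        for (email, country) in conn.execute(
            "SELECT email, country FROM quarantine_customers")
    }

    promoted, rejected = 0, 0
    for row in cleaned:
        try:
            conn.execute("INSERT INTO customers VALUES (?, ?)", row)
            promoted += 1
        except sqlite3.IntegrityError:  # failed the CHECK or UNIQUE constraint
            rejected += 1

    print(promoted, rejected)  # 1 clean record promoted, 1 malformed rejected
    ```

    The point of the sketch is the separation of zones: nothing reaches the governed table without passing standardization and the declared integrity constraints.
    
    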

    How to navigate your way around third-party data

    The COVID-19 crisis has clearly highlighted that prevention is better than cure, and the need to maintain safe, minimal human contact has been stressed immensely. The same logic applies when enterprises rely on third-party data: the risks increase manifold, because an enterprise cannot ensure that a third-party data partner or vendor follows proper data quality processes and procedures.

    The questions that should keep you up at night are:

    • Will my third-party data partner disclose their data assessment and audit processes?
    • What are the risks involved, and how can they be best assessed, addressed, mitigated, and monitored?
    • Does my data partner have an adequate security response plan in case of a data breach?
    • Will a vendor agreement suffice in protecting my business interests?
    • Can an enterprise hold a third-party vendor accountable for data quality and data integrity lapses?  

    Smart data distancing for managing third-party data

    The third-party data risk landscape is complex. If the third-party’s data integrity is compromised, your organization stands to lose vital business data. However, here are a few steps you can take to protect your business:

    • Create a thorough information-sharing policy for protection against data leakage.
    • Streamline data dictionaries and metadata repositories to formulate a single cohesive data management policy that furthers the organization’s objectives.
    • Maintain the quality of enterprise metadata, ensuring consistency across all organizational units to increase its trust value.
    • Integrate the linkage between business goals and the enterprise information running across the organization with the help of a robust metadata management system.
    • Schedule periodic training programs that emphasize the value of data integrity and its role in decision-making.

    The functional importance of the data steward in the data management and governance framework is often overlooked. The hallmark of a good data governance framework lies in how well the role of the data steward has been defined within the organization. The data steward (or custodian) determines the fitness of your data elements, establishes controls, evaluates vulnerabilities, and remains on the front line in managing any data breach. As a conduit between IT and end users, a data steward offers a transparent overview of an organization’s critical data assets that can help you have nuanced conversations with your customers. 

    Unlock the benefits of smart data distancing

    Smart and unadulterated data is instrumental to the success of data governance. However, many enterprises are content to meet only the bare minimum standards of compliance and regulation, overlooking the priority that data quality deserves. Smart data means cleaner, higher-quality data, which in turn means sharper analytics that translate directly into better decisions and better outcomes.

    Gartner values corporate data at 20-25% of enterprise value, and organizations should learn to monetize it and use it wisely. They can reap the benefits of the historical and current data amassed over the years by harnessing it and linking it to new business initiatives and projects. Data governance based on smart enterprise data offers the strategic competence to gain a competitive edge and improve operational efficiency.

    Conclusion 

    It is an accepted fact that an enterprise with poor data management will suffer an impact on its bottom line. Not having a properly defined data management framework can create regulatory compliance issues and impact business revenue.

    Enterprises are beginning to see the value of data in driving better outcomes and are therefore accelerating their efforts to set up robust data governance initiatives. Many technology solutions and platforms are available, but the first step for an enterprise is to develop a data-driven mindset and to be receptive to a transformative culture.

    The objective is to ensure that enterprise data serves cross-functional business initiatives with insightful information, and for that to happen the data needs to be accurate, meaningful, and trustworthy. Becoming a successful data-driven enterprise can be a daunting objective with a long transformational journey. Take a step in the right direction today with smart data distancing!

    Author: Sowmya Kandregula

    Source: Dataversity

  • What to expect for data governance in 2020?

    What to expect for data governance in 2020?

    Data governance always has been a complicated issue for most organizations. That won’t change in a big way in 2020. In fact, the increasing prevalence of technologies like artificial intelligence (AI) and machine learning (ML) may expose the pain points even more. Don’t take that to mean that companies aren’t becoming more mature in their approach to data governance, though.

    AI, ML, the Internet of Things (IoT), and full process digitization will be a focus for organizations in 2020. Companies see them as required capabilities in the future and so are willing to invest in more digital innovation. 'This is expanding the governance lens and I’m seeing AI Governance becoming a reality in leading organizations', said Kelle O’Neal, founder and CEO of First San Francisco Partners. This trend shows that companies are seeing value in Data Governance so they’re extending successful practices into other areas of their business, she said.

    Organizations are realizing that AI is only successful when built upon a solid data foundation, thus driving the need for data governance, agreed Donna Burbank, managing director at Global Data Strategy:

    'I’ve had venture capital organizations approach us to train their AI startups in the foundations of data governance as a condition for investment', she said. 'I see that as an extremely positive sign pointing to the widespread need and adoption of data governance principles'.

    And yet poor data quality resulting from problems with data governance bedevils AI and ML outcomes, and there’s no sign that that won’t be the case next year too.

    'Artificial intelligence and machine learning have been way oversold. Data quality gets in the way of getting good results and organizations spend way, way more time cleaning things up', said Thomas C. Redman, Ph.D., 'the Data Doc' and President of Data Quality Solutions. He estimates that more than 80% of AI and ML programs continue to fail because of this.

    Governance defined … yet?

    One question that many companies will continue to grapple with in the new year is figuring out just what data governance is. In simple terms, said Redman, it’s a senior oversight function whose leaders advise the board or senior management about whether a data-related program is designed in the best interest of the company and is operating as designed. And as he sees it, no one is doing that yet.

    'There’s all talk about data as the most important asset, but having that oversight level would be essential if that statement were correct', he said. It’s not about plugging in various tools but about thinking of just what data governance is … and what it isn’t:

    'The term ‘governance’ is being used for everything from moving data from here to there to something about how you operate analytics. That’s not the proper use of the term'.

    Getting roles and responsibilities right is critical, he said. Data governance should be business-led and IT supported, Burbank remarked: 

    'All areas of the business need to have accountability for the data in their domain and establishing data stewardship roles is critical to ensuring accountability at all levels of the organization from strategic to tactical'.

    Chief Data Officer (CDO) roles are becoming more common, and the office of the CDO does best when it reports up through a business function like operations, strategy, or shared services, said O’Neal, or even finance if that team is influential in driving enterprise programs that result in corporate growth.

    Organizations that have matured their data governance practices will grow from a program culture to a data culture, which is one:

    'Where new employees start learning about data governance as part of their new-hire training, and data governance and management are part of the conversation at the board level', said O’Neal.

    What will data governance look like in 2020?

    It’s true that there haven’t been drastic changes in how far we’ve come with data governance over the past year, but O’Neal finds that companies are showing progress:

    'More and more companies are moving from ‘what is data governance and why should I do it,’ past creating a strategy, into not just implementation but also operationalization, where their data governance is really embedded with other project, decision-making, and ‘business as usual’ operations', she said.

    In terms of a formal, structured approach, the DAMA DMBoK is gaining wide acceptance, which is a positive step in aligning best practices, Burbank said:

    'While data governance is certainly not a ‘cookie cutter’ approach that can be simply taken from a book, the DMBOK does offer a good foundation on which organizations can build and customize to align with their own unique organizational needs and culture'.

    In 2019, Global Data Strategy supported data governance for a diverse array of sectors, including social services, education, manufacturing, insurance, building, and construction. 'It’s no longer just the traditional sectors like finance who understand the value of data', she said.

    Big value in small wins

    It’s really hard to impose data governance frameworks on big data at enterprise scale. It is better to start with small data first, and Redman is optimistic that more companies will do so in 2020.

    'Practically everyone sees the logic in small data projects', he said. 'Suppose that only half of a hundred small data projects succeed, that’s a huge number of wins', with positive implications for cost savings and improvements in areas like customer service. And solving more of these leads to learning about what it takes to solve big data problems. 'If you build the organizational muscle you need doing small data projects you can tackle big data projects'.

    Following the classic rule of thinking big and starting small in order to have the proper data governance framework and foundation in place is what works, Burbank said. Establishing small 'quick wins' shows continual value across the organization.

    Tools to help

    2018 saw astounding growth in the data catalog market, O’Neal said. Data catalogs provide information about each piece of data, such as location of entities and data lineage. So, if you haven’t thought about that yet, it’s time to do that this year, she said.
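    As a hypothetical illustration of what a catalog records per asset (the field names and assets below are invented for the example), a minimal sketch might capture each asset’s location, steward, and upstream lineage:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class CatalogEntry:
        name: str          # business name of the data asset
        location: str      # physical location: table, bucket, or API
        steward: str       # accountable data steward
        lineage: list = field(default_factory=list)  # upstream asset names

    catalog = {}

    def register(entry: CatalogEntry) -> None:
        catalog[entry.name] = entry

    def upstream_of(name: str) -> list:
        """Walk lineage recursively to list every upstream source of an asset."""
        sources = []
        for parent in catalog[name].lineage:
            sources.append(parent)
            if parent in catalog:
                sources.extend(upstream_of(parent))
        return sources

    register(CatalogEntry("crm_contacts", "warehouse.crm.contacts", "sales ops"))
    register(CatalogEntry("web_events", "s3://events/web/", "marketing"))
    register(CatalogEntry("customer_360", "warehouse.marts.customer_360",
                          "data office", lineage=["crm_contacts", "web_events"]))

    print(upstream_of("customer_360"))  # ['crm_contacts', 'web_events']
    ```

    Even this toy structure shows why catalogs matter for governance: the lineage walk answers "where did this dashboard’s data come from?" without hunting through pipelines.
    
    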

    The good news is that the modern tools for Metadata Management and data cataloguing are much more user-friendly and approachable, according to Burbank:

    'Which is a great advancement for giving business users self-service capability and accountability for metadata and governance'.

    Redman noted that 'you can love your data governance tools, and I do too. But if you approach the problem wrong it doesn’t matter what tools you have'.

    What’s up next

    In 2020, the organizations that are able to get their own data governance in order will reach out to others in the industry to establish cross-organization data governance and data sharing agreements:

    'For example, organizations in the social services or medical arena are looking to provide cohesive support for individuals across organizations that provide the best level of service, while at the same time protecting privacy', Burbank said. 'It’s an interesting challenge, and an area of growth and opportunity in the data governance space'.

    There’s an opportunity this year for companies that are moderately mature in data governance to think about how to embed practices in business processes and decision-making structure of the organization. Places to look for embedment opportunities, O’Neal commented, are new project initiation and project management, investment approval and funding, customer creation and on-boarding, product development and launch, and vendor management/procurement.

    Expect data analytics and BI to continue to be large drivers for data governance:

    'As more organizations want to become data-driven', Burbank said, 'they are realizing that the dashboards used to drive business decision-making must be well-governed and well-understood with full data lineage, metadata definitions, and so on'.

    Author: Jennifer Zaino

    Source: Dataversity
