29 items tagged "data"

  • 4 Tips to help maximize the value of your data

    4 Tips to help maximize the value of your data

    Summer’s lease hath all too short a date.

    It always seems to pass by in the blink of an eye, and this year was no exception. Though I am excited for cooler temperatures and the prismatic colors of New England in the fall, I am sorry to see summer come to an end. The end of summer also means that kids are back in school, reunited with their friends and equipped with a bevy of new supplies for the new year. Our kids have the tools and supplies they need for success; why shouldn’t your business?

    This month’s Insights Beat focuses on additions to our team’s ever-growing body of research on new and emerging data and analytics technologies that help companies maximize the value of their data.

    Get real about real time

    Noel Yuhanna and Mike Gualtieri published a Now Tech article on translytical data platforms. Since we first introduced the term a few years ago, translytical data platforms have been a scorching hot topic in database technology. Enabling real-time insights is imperative in the age of the customer, and there are a number of vendors who can help you streamline your data management. Check out their new report for an overview of 18 key firms operating in this space, and look for a soon-to-be-published Forrester Wave™ evaluation in this space, as well.

    Don’t turn a blind eye to computer vision

    Interested in uncovering data insights from visual assets? Look no further than computer vision. While this technology has existed in one form or another for many years, development in convolutional neural networks reinvigorated computer vision R&D (and indeed established computer vision as the pseudo-progenitor of many exciting new AI technologies). Don’t turn a blind eye to computer vision just because you think it doesn’t apply to your business. Computer vision already has a proven track record for a wide variety of use cases. Kjell Carlsson published a New Tech report to help companies parse a diverse landscape of vendors and realize their (computer) vision.

    Humanize B2B with AI

    AI now touches on virtually all aspects of business. As AI techniques grow more sophisticated, so too do their use cases. Allison Snow explains how B2B insights pros can leverage emerging AI technologies to drive empathy, engagement, and emotion. Check out the full trilogy of reports and overview now.

    Drive data literacy with data leadership

    Of course, disruptive changes to data strategy can be a hard sell, especially when your organization lacks the structural forces to advocate for new ideas. Jennifer Belissent, in a recent blog, makes the case for why data leadership is crucial to driving better data literacy. Stay tuned for her full report on data literacy coming soon. More than just leadership, data and analytics initiatives require investment, commitment, and an acceptance of disruption. No initiative will be perfect from the get-go, and it’s important to remember that analytics initiatives don’t usually come with a magician’s reveal.

    Author: Srividya Sridharan

    Source: Forrester

  • 5 Guidelines that keep your business's analytics app working optimally

    5 Guidelines that keep your business's analytics app working optimally

    One of the key challenges faced by organizations deploying an enterprise-wide analytics solution is the maintenance and upgrade of its applications. Most organizations follow an agile development methodology that entails frequent releases with new content as well as routine upgrades, patches, fixes, security updates, etc.

    Depending on the complexity of the application, you need to invest a significant amount of time, energy, and manpower to ensure that none of the existing reports, dashboards, or underlying data is adversely impacted by any of these maintenance tasks. Any degradation in the performance or accuracy of data in these applications not only reflects poorly on the system administrators, but also erodes trust in the analytics solution, ultimately hurting user adoption and business value throughout the organization.

    Hence, it is critical for system administrators to ensure that the application and the data within it remain consistent and reliable for end users, irrespective of the ongoing maintenance tasks they have to perform on the system.

    A typical testing methodology adopted by most organizations involves manual testing and 'eye-balling' of a subset of reports and data after major maintenance tasks such as patches and updates. Organizations with more resources may create custom test scripts and automate certain parts of the testing and QA process.

    Upgrades are typically more involved and take a lot more time and testing to ensure consistency. When your analytics application grows to thousands of users and tens of thousands of reports and dashboards, it is usually cost prohibitive to test every single report for every user. Hence, automation of this testing process is critical to the long-term success of an analytics application.

    Here are five things to keep in mind when automating testing of analytics applications:

    1. Real-world applications

    Make sure that tests are run on as many real-world production applications as possible. Testing on just one or a handful of sample environments is not ideal and can lead to unforeseen issues when deploying the update or an upgrade. The applications on which tests are run need to be representative of real-world applications that your users or customers will be using.

    2. Replica of live production system

    Ensure that there is no impact to the actual live production system at the time of testing. To run a series of tests at any time of the day, you need a replica of the production system with the same hardware and software, but in an isolated environment that is as similar to the production system as possible. This way, as your users report new issues, you can analyze them and assess their impact by running tests in a separate environment so system performance for users is not affected by the ongoing testing. Using a cloud platform makes it easier to quickly provision a replicated environment for testing purposes.

    3. Platform approach to testing

    It is really important to design the automated testing system as a platform for running a variety of tests, rather than creating disjointed automation scripts for different scenarios. The testing process also needs to be updated whenever it fails to catch an issue. With a single platform, you can achieve economies of scale and optimize and share the testing infrastructure across multiple scenarios and applications.

    4. Granularity of test execution data

    Test results should not be simply binary in terms of pass or fail. Irrespective of whether an application passes or fails a particular test, it is important to capture detailed statistics and information from every step of the testing process. This will help you identify and anticipate future issues and fine tune the testing process.

    5. Analysis of test results 

    Depending on the complexity of the testing process, the analysis of test results can be a full-fledged analytics application in itself. The test results should be stored in an optimized format (for example, in a data warehouse) that makes it easy to analyze in detail to gain further insights into the application performance. This will also help analyze historical system test results and monitor the performance over a period of time.
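    Points 4 and 5 above can be sketched in a few lines. The following is a minimal, illustrative example (all step names, table columns, and numbers are invented): it runs named test steps, records per-step status, timing, and detail into a SQLite table standing in for a test-results warehouse, and then queries the results back out.

```python
import sqlite3
import time

def run_suite(steps, db_path=":memory:"):
    """Run (name, fn) test steps, recording granular results per step."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS test_results (
               step TEXT, passed INTEGER, duration_ms REAL,
               detail TEXT, run_at REAL)"""
    )
    for name, fn in steps:
        start = time.perf_counter()
        try:
            detail = fn()              # each step returns a detail string
            passed = 1
        except AssertionError as exc:
            detail, passed = str(exc), 0
        duration_ms = (time.perf_counter() - start) * 1000
        conn.execute(
            "INSERT INTO test_results VALUES (?, ?, ?, ?, ?)",
            (name, passed, duration_ms, detail, time.time()),
        )
    conn.commit()
    return conn

# Two hypothetical checks for a report; the numbers stand in for
# values a real system would fetch from the analytics application.
def check_row_count():
    rows = 1042
    assert rows > 0, "report returned no rows"
    return f"rows={rows}"

def check_freshness():
    age_hours = 2
    assert age_hours < 24, f"data is {age_hours}h old"
    return f"age_hours={age_hours}"

conn = run_suite([("row_count", check_row_count),
                  ("freshness", check_freshness)])
for row in conn.execute("SELECT step, passed, detail FROM test_results"):
    print(row)
```

    Because every run is stored with timing and detail rather than a bare pass/fail, the results table itself becomes analyzable history, which is exactly what point 5 calls for.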


    With the ever-increasing importance of analytics and the use of mobile devices at an all-time high, an optimally functioning analytics app is valuable to any business. These apps should be unaffected by necessary processes like updates, testing, and maintenance in order to keep working optimally.

    That's why it's crucial that your business aways keeps the guidelines mentioned above in mind. Keep improving your applications, especially the ones connected to your business' analytics solution, but never let these improvements affect the use of the app negatively!

    Author: Pradyut Bafna

    Source: MicroStrategy

  • 6 Ways to improve the intelligence of your business

    6 Ways to improve the intelligence of your business

    Business Intelligence (BI) once was a luxury used by corporations and enterprises who invested in a team of data scientists and IT specialists. Modern technology and software tools have made it possible for anyone to increase their BI value within their organization. Small and medium-sized businesses can use the same tools without investing a lot of time or money. Here are some ways to increase the value of your business intelligence.

    Spread it across your organization

    Where is Business Intelligence needed in your organization? Which part of your organization should you focus on? Is it something you should use for a single department or as a company-wide tool? These are common questions for businesses of all sizes.

    In many organizations, BI is used mostly by executives; in others, it is primarily a sales department tool. Companies that confine BI to a single department aren’t getting the most out of their BI efforts. Business Intelligence is effective in any department within your organization if you allow it to be. The organizations that get the most out of their BI investments use it across their entire organization.

    Make it proactive

    Business Intelligence is just another form of data visualization. It helps you understand data and make better business decisions. It can become proactive once you understand your business's triggers. Automated alerts, which can incorporate multilayer lineage, inform you when unusual changes are taking place in your data.

    If you want to increase the value of your BI, then set up automated alerts. They can come in the form of e-mail or SMS alerts when your data hits a certain threshold, for example. This turns your BI from a reactive tool into a proactive tool. Automated alerts help you address issues as they arise.
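    As a sketch of the idea, the snippet below checks a metric against an upper and a lower threshold and produces an alert message only when the value falls outside the band. The metric value and thresholds are invented for illustration; in a real system the message would be routed to an e-mail or SMS gateway on a schedule rather than printed.

```python
def check_alert(value, lower, upper):
    """Return an alert message if value falls outside [lower, upper], else None."""
    if value < lower:
        return f"ALERT: value {value} fell below threshold {lower}"
    if value > upper:
        return f"ALERT: value {value} exceeded threshold {upper}"
    return None

# Stand-in for a real metric query (e.g. yesterday's revenue).
daily_revenue = 7200
msg = check_alert(daily_revenue, lower=10000, upper=50000)
if msg:
    print(msg)   # in production: hand off to an e-mail/SMS gateway
```

    The point of the band (rather than a single threshold) is that both unusually low and unusually high values can signal a problem worth investigating.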

    Incorporate self-service options

    When it comes to traditional BI, users request the reports they need. Then they wait for their IT team to create those reports. Depending on the efficiency of the IT team, this entire process could take several weeks. This process can be frustrating for both the IT group and its users.

    IT’s workload is endless with the number of requests they receive in a given day. This leads to slow turnaround times and frustrated users. To increase the value of your BI investment, you need to eliminate the old reporting process. You should give users access to self-service tools, so they can access these reports immediately. This eliminates the need to bother the IT department, which can then pay attention to more important things.

    Automate everything

    Business Intelligence depends on the data that supports it. BI built on outdated or inaccurate data is worse than no BI at all. Most organizations don’t realize that they can’t simply point BI software at their existing data and create reports right away. The data must first be consolidated into one place and formatted specifically for the BI tool that’s used.

    Businesses turn to manual processes to meet this need. Not only does this waste time, but it leads to human errors. About 90% of spreadsheets contain data errors. To invest in your BI, you should automate your data with a data warehouse and ETL tool. This can significantly reduce errors and save time processing data.
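    As a rough illustration of that consolidate-and-validate step (with invented column names, and inline CSV strings standing in for real source files), a minimal ETL pass extracts rows from each source, rejects rows that fail validation instead of silently propagating spreadsheet-style errors, and loads the clean rows into one consolidated table.

```python
import csv
import io

# Two "sources" with the same schema; in practice these would be
# files or database extracts. One row contains a deliberate error.
source_a = "region,revenue\nNorth,1200\nSouth,not_a_number\n"
source_b = "region,revenue\nEast,900\nWest,1500\n"

def extract(text):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Normalize rows; collect rejects instead of failing silently."""
    clean, rejects = [], []
    for row in rows:
        try:
            clean.append({"region": row["region"].strip(),
                          "revenue": float(row["revenue"])})
        except ValueError:
            rejects.append(row)   # captured for review, not eyeballed
    return clean, rejects

warehouse, all_rejects = [], []
for src in (source_a, source_b):
    clean, rejects = transform(extract(src))
    warehouse.extend(clean)
    all_rejects.extend(rejects)

print(f"loaded {len(warehouse)} rows, rejected {len(all_rejects)}")
```

    The reject list is the important part: an automated pipeline surfaces every bad row explicitly, whereas a manual spreadsheet process tends to let such errors slip through unnoticed.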

    Automated processes can help businesses optimize customer engagement and marketing efforts. There are automated tools for content marketing, online donations, social media management, etc. This can free up time so you can focus on effectively meeting your business goals.

    Extend BI across all your devices

    Business Intelligence needs to keep up with the demands of modern technology. It should provide the data you need across all of your devices. Many organizations still access their data only on desktop computers, but thanks to mobile technology, you can no longer predict how or when users will need access to important data.

    They may access the data on their smartphone, open a secure dashboard on their tablet, or have to use a desktop computer or laptop. You can never be sure which device they’ll use to access this data. That’s why you should extend BI across all devices.

    If you want to raise your BI to the next level, it must be accessible everywhere. It should adapt to whatever device it’s used on. One way to increase your Business Intelligence efforts is to offer a mobile option for users. It will allow your employees to stay informed at all times.

    Make use of external data

    Business Intelligence allows you to create and run reports and gather insights from internal data. It can be used to answer questions about your organization’s internal profit, productivity, revenue, and other important factors. But most businesses aren’t aware of the goldmine of information available in external sources. You have access to more data than you know.


    How can Business Intelligence help your business achieve its goals? If you combine your external data with your internal data, you can open up new possibilities for your entire organization. This can add more information about your existing and potential customers. It’s imperative for brick-and-mortar businesses to make use of this information to increase marketing opportunities and expand their customer base.

    Author: Philip Piletic

    Source: SmartDataCollective

  • 7 Personality assets required to be successful in data and tech

    7 Personality assets required to be successful in data and tech

    If you look at many of the best-known visionaries, such as Richard Branson, Elon Musk, and Steve Jobs, there are certain traits that they all have which are necessary for being successful. So this got me thinking, what are the characteristics necessary for success in the tech industry? In this blog, I’m going to explain the seven personality traits that I decided are necessary for success, starting with:

    1. Analytical capabilities

    Technology is extremely complex. If you want to be successful, you should be able to cope with complexity. Complexity not only from technical questions, but also when it comes to applying technology in an efficient and productive way.

    2. Educational foundation

    Part of the point above is your educational foundation. I am not talking so much about specific technical expertise learned at school or university, but more the general basis for understanding certain theories and relations. The ability to learn and process new information very quickly is also important. We all know that we have to learn new things continuously.

    3. Passion

    One of the most important things in the secret sauce for success is being passionate about what you do. Passion is the key driver of human activity, and if you love what you’re doing, you’ll be able to move mountains and conquer the world. If you are not passionate about what you are doing, you are doing the wrong thing.

    4. Creativity

    People often believe that if you are just analytical and smart, you’ll automatically find a good solution. But in the world of technology, there is no one single, optimal, rational solution in most cases. Creating technology is a type of art, where you have to look for creative solutions, rather than having a genius idea. History teaches us that the best inventions are born out of creativity.

    5. Curiosity

    The best technology leaders never stop being curious like children. Preserving an open mind, challenging everything and keeping your curiosity for new stuff will facilitate your personal success in a constantly changing world.

    6. Persistence

    If you are passionate, smart, and creative and find yourself digging deeply into a technological problem, then you’ll definitely need persistence. Keep being persistent to analyze your problem appropriately, to find your solution, and eventually to convince others to use it.

    7. Being a networker and team player

    If you have all the other skills, you might already be successful. But the most important booster of your success is your personal skillset. Being a good networker and team player, and having the right people in your network to turn to for support, will make the whole journey far easier. There might be successful mavericks, but the most successful people in technology have a great set of soft skills.

    As you’ll notice, these characteristics aren’t traits that you are necessarily born with. For those who find that these characteristics don’t come naturally to them, you’ll be pleased to hear that all of them can be learned and adopted through hard work and practice. Anyone can be successful in tech, and by keeping these traits in mind in the future, you too can ensure a long and successful career in tech.

    Author: Mathias Golombek

    Source: Dataversity


  • Aligning your business with your data team

    Aligning your business with your data team

    It’s important for everyone at a company to have the data they need to make decisions. However, if they just work with their data team to retrieve specific metrics, they are missing out. Data teams can provide a lot more insights at a faster rate, but you will need to know how to work with them to make sure that everyone is set up for success. 

    Data teams can be thought of as experts at finding answers in data, but it’s important to understand how they do that. In order to get the most value out of your collaboration, you need to help them understand the questions that matter to you and your team and why those questions need to be answered. There are a lot of assumptions that get built into any analysis, so the more the data team knows about what you are looking for, the more knowledge they may find as they explore data to produce their analysis. Here are four tips to make more productive requests from members of your data team: 

    Approach data with an open mind

    It’s important to treat the data request process as an open-ended investigation, not a way to find data that proves a point. A lot of unexpected insights can be found along the way. Make your goal to ask questions and let your data team search for the answers. This approach will allow you to get the best insights, the type of unknowns that could change your decision for the better. If you put limitations on what you’re asking the data, you’ll end up putting limitations on the insights you can get out of your inquiry. 

    To really dig into this, think about how questions are answered scientifically. Scientists treat any bias as a threat that can compromise the insight. For example, let’s say you are looking to improve customer satisfaction with your product. Requesting a list of customers with the highest and lowest NPS scores will give you a list of people who are happiest or most frustrated, but it is not going to tell you how to improve customer satisfaction. This request puts too much attention on the outliers in your customer base rather than identifying the key pain points. That’s part of the picture, but not all of it. If you’re trying to create a program that targets your goal, let your data team know the goal, give them a few optional starting points, and see what they come back with. They might surprise you with some sophisticated analysis that provides more insight and helps you launch a fantastic program.

    Start with a conversation, not a checklist

    The single biggest mistake a line-of-business professional can make when requesting data is to present a data expert with a list of KPIs and tell the data team to just fill in the blanks. This approach misses so much of the value a data team can provide. Modern data teams have technology and abilities that allow them to go much further than just calculating numbers. They can guide analytical exploration through flexible, powerful tools to make sure you’re getting the most valuable insights out of your data.

    Instead of a list of metrics, think about starting your data request as a meeting. You can provide the business context needed and a list of questions that you want answered. You can even present some initial hypotheses about what those numbers may look like and why they might move in one direction or another. This is a great way to kick off the conversation with your data counterpart. From there, you can benefit from their experience with data to start generating new and more informed questions from their initial inquiries. The data team’s job is to get you information that helps you be more informed, so give them as much context as possible and let them work as a problem solver to find data-driven recommendations.

    Data should recommend actions, not just build KPI reports

    A lot of standard business KPIs measure the results of company efforts: revenue, lead conversion, user count, NPS, etc. These are important statistics to measure, but the people tracking them should be very clear that these numbers track how the company is moving, not why it is moving that way. To make these data points actionable, you need to take analysis further. Knowing that your NPS is going up or down is useless if it doesn’t inform a customer team about the next step to take. 

    A good data team will map important KPIs to other data and find connections. They’ll comb through information to find the levers that are impacting those important KPIs the most, then make recommendations about how to achieve your goals. When you get a list of levers, make sure to understand the assumptions behind the recommendations and then take the right actions. You can always go back to those KPI reports to test if the levers are having the intended effect.
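    One simple way a data team might begin hunting for such levers is to correlate candidate driver metrics with the KPI itself. The sketch below uses invented numbers and plain Pearson correlation; a real analysis would use far more data and control for confounders, since correlation alone does not establish a causal lever.

```python
import statistics

# Invented example series: an NPS-style KPI and two candidate levers.
nps =            [30, 35, 32, 40, 45, 50]   # the KPI to explain
response_hours = [48, 40, 44, 30, 22, 16]   # candidate lever 1
ad_spend =       [10, 12,  9, 11, 10, 12]   # candidate lever 2

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for name, series in [("support response time", response_hours),
                     ("ad spend", ad_spend)]:
    print(f"{name}: r = {pearson(nps, series):+.2f}")
```

    In this toy data, support response time moves strongly (and inversely) with NPS while ad spend barely moves with it at all, which is the kind of signal that would prompt a recommendation to invest in faster support, subject to further validation.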

    Data requests are iterative: give the data person feedback

    Communication about data should not end when the data has been delivered to you. It’s important to dig into the analysis and see what you can learn. Instead of reporting that data or taking action on it right away, check with your dashboard creator to verify that you’re reading all of the data properly and that the next steps are clear. There are a lot of ways to misinterpret data; continuing to communicate is a good way to prevent mistakes.

    Even if you’ve gotten the right takeaways from the data, it’s still good to consult with your dashboard creator and go over your interpretation of the information so they know how you read data. You may need a follow-up meeting to restart with the overall question you want to answer, then see what additional data needs to be collected or what modifications are needed to make the report or dashboard work best for your intended use-case.

    Author: Christine Quan

    Source: Sisense

  • Chinachem: successful use of SAP software in the Hong Kong property market

    Chinachem: successful use of SAP software in the Hong Kong property market

    According to a January story in the South China Morning Post, Hong Kong has topped the table as the world’s most expensive housing market for the 9th straight year. That sounds like good news for property developers in the area. But, according to the same story, prices of Hong Kong homes also decreased by 7.2% over the last four months.

    What the news really shows is that the property market can be volatile. Combined with long construction times running into multiple years and billion-dollar capital investments, that makes property development an extremely challenging industry.

    Few of Hong Kong’s developers are more aware of that than the Chinachem Group. While Chinachem began its life in agricultural projects and chemicals, it has developed into one of Hong Kong’s most famous property companies over the years, through prosperous times and tough times. Recently, Chinachem was able to win a big land parcel in one of Hong Kong’s upmarket suburbs after surveyors cut their valuation by 10 per cent, another sign of softening property prices.

    However, in an industry that is often very traditional in its execution, it is not just prices that are putting property businesses under increasing competitive pressure. The digital explosion is also having a huge effect. As Chinachem’s Executive Director and Chief Executive Officer, Donald Choi, points out: technology is changing how companies in every industry are organized and run. And Chinachem is no different.

    Changing times

    Hong Kong has been lucky in a way, especially in the property market, which has been a long-term growth market. But complacency can be a killer.

    Chinachem’s Head of Group IT, Joseph Cagliarini, believes that the lesson to be learned from a truly global brand like Kodak, which went bankrupt because the world changed from film to digital photography, cannot be overlooked. Instead, he calls for a relentless pursuit of technology to make sure Chinachem is not only listening to its customers, but able to respond appropriately.

    Different companies are at different stages of embracing digital transformation and technology. Anticipating what is required and strategizing change, Chinachem has turned its eyes to a new generation of milestones, and embarked on a journey to become an intelligent business.

    For the long-time property developer, that change starts with (real-time) data. Like many companies, Chinachem didn’t have a central view of its operations. So, all of its business units operated autonomously to some extent. That created a mountain of manual processes, and many separate systems containing valuable information.

    In October 2018, Chinachem selected a comprehensive suite of SAP software and cloud solutions to drive operational efficiency across finance and HR operations for its corporate group and hotels, in order to help drive long-term efficiencies and growth. SAP is also providing Chinachem with solutions to help drive rapid innovation and increase the strategic value of human resources.

    Once the solutions are fully implemented, Chinachem will enjoy a variety of benefits, including real-time updates on financial performance that will optimize their finance processes. This includes everything from planning and analysis to period end close and treasury management.

    Long-term plans

    Other key features also support the group’s long-term objectives, such as enhancing financial planning and analysis, accelerating planning cycles, increasing profitability, and making finance functions more efficient. Chinachem is now able to accelerate the building and deployment of apps and extensions that engage employees in new ways. This will allow HR to be flexible and innovative without compromising the organization’s core HR processes.

    In addition, Chinachem’s hotels can personalize their end-to-end HR landscape, creating an outstanding, seamless and secure user experience. The group can also leverage data from SAP solutions to make insightful business decisions that will have lasting impact.

    Customers are still king

    Chinachem’s journey also involves adapting to changing customers who now live on multiple platforms, both online and offline.

    With the right technology and software, Chinachem will be able to monitor customer behavior and therefore respond to customers’ needs without being asked. Executive Director of Chinachem, Donald Choi, believes that advanced data analytics could be the key to this. Not to replace offline experiences, but to be at all the right places at the right time.

    In an ever-changing and increasingly digital world, a comprehensive suite of SAP software and cloud solutions may not be the final answer for all of Chinachem’s needs. However, as Donald Choi says, “it is a good starting point for this journey.”

    Author: Terence Leung

    Source: SAP

  • Data as a universal language

    Data as a universal language

    You don’t have to look very far to recognize the importance of data analytics in our world; from the Weather Channel using historical weather patterns to predict the summer, to a professional baseball team using on-base plus slugging percentage to determine who is more deserving of playing time, to Disney using films’ historical box office data to nail down the release date of its next Star Wars film.

    Data shapes our daily interactions with everything, from the restaurants we eat at, to the media we watch and the things that we buy. Data defines how businesses engage with their customers, using website visits, store visits, mobile check-ins and more to create a customer profile that allows them to tailor future interactions with you. Data enhances how we watch sports, such as the World Cup, where broadcasters share data about players’ top speed and how many miles they run during the match. Data is also captured to remind us how much time we are wasting on our mobile devices, playing online games or mindlessly scrolling through Instagram.

    The demand for data and the ability to analyze it has also created an entirely new course of study at universities around the world, as well as a career path that is currently among the fastest growing and most sought-after skillsets. While data scientists are fairly common and chief data officer is one of the newest executive positions focused on data-related responsibilities, data analytics no longer has to be exclusive to specialty roles or the overburdened IT department.

    And really, what professional can’t benefit from actionable intelligence?

    Businesses with operations across the country or around the world benefit from the ability to access and analyze a common language that drives better decision making. An increasing number of these businesses recognize that they are creating volumes of data that have value, and, perhaps even more important, that they need a centralized collection system for the information so they can use the data to be more efficient and improve their chances for success.

    Sales teams, regardless of their location, can use centrally aggregated customer data to track purchasing behavior, develop pricing strategies to increase loyalty, and identify what products are purchased most frequently in order to offer complementary solutions to displace competitors.

    Marketing teams can use the same sales data to develop focused campaigns that are based on real experiences with their customers, while monitoring their effectiveness in order to make needed adjustments and or improve future engagement.

    Inventory and purchasing can use the sales data to improve purchasing decisions, ensure inventory is at appropriate levels and better manage slow moving and dead stock to reduce the financial impact on the bottom line.

    Branch managers can use the same data to focus on their own piece of the business, growing loyalty among their core customers and tracking their salespeople’s performance.

    Accounts receivable teams can use the data to focus their efforts on the customers that need the most attention in terms of collecting outstanding invoices. And integrating the financial data with operational data paints a more complete picture of performance for financial teams and executives responsible for reporting and keeping track of the bottom line.

    Data ties all of the disciplines and departments together regardless of their locations. While some may care more about product SKUs than P&L statements or on-time-in-full deliveries, they can all benefit from a single source of truth that turns raw data into visual, easy-to-read charts, graphs and tables.

    The pace, competition and globalization of business make it critical for your company to use data to your advantage, which means moving away from gut feel or legacy habits and basing key decisions on the facts found in your ERP, CRM, HR, marketing and accounting systems. With the right translator, or data analytics software, the ability to use your data, based on roles and responsibilities, to improve sales and marketing strategies, customer relationships, stock and inventory management, financial planning and corporate performance can be available to everyone within your organization, making data a true universal language.
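
To illustrate the "universal language" idea, here is a minimal, hypothetical sketch in Python showing how one shared pool of sales records can feed different role-specific views (the data and field names are invented for illustration):

```python
from collections import defaultdict

# A shared pool of raw sales records (hypothetical sample data).
sales = [
    {"branch": "Boston", "rep": "Avery", "sku": "A-100", "qty": 3, "revenue": 450.0},
    {"branch": "Boston", "rep": "Blake", "sku": "B-200", "qty": 1, "revenue": 120.0},
    {"branch": "Denver", "rep": "Casey", "sku": "A-100", "qty": 5, "revenue": 750.0},
]

def top_skus(records):
    """Purchasing view: which products sell most frequently?"""
    counts = defaultdict(int)
    for r in records:
        counts[r["sku"]] += r["qty"]
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

def branch_revenue(records):
    """Branch-manager view: revenue per branch, from the same records."""
    totals = defaultdict(float)
    for r in records:
        totals[r["branch"]] += r["revenue"]
    return dict(totals)

print(top_skus(sales))        # A-100 leads with 8 units
print(branch_revenue(sales))  # Boston 570.0, Denver 750.0
```

Each department queries the same single source of truth; only the aggregation changes.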

    Source: Phocas Software

  • Data management: building the bridge between IT and business

    Data management: building the bridge between IT and business

    We all know businesses are trying to do more with their data, but inaccuracy and general data management issues are getting in the way. For most businesses, the status quo for managing data is not working. However, new research shows that data is moving from a knee-jerk, "must be IT's issue" conversation to a "how can the business better leverage this rich (and expensive) data resource we have at our fingertips" conversation.

    The emphasis is on "conversation": business and IT need to communicate in the new age of artificial intelligence, machine learning, and interactive analytics. Roles and responsibilities are blurring, and it is expected that a company's data will quickly turn from a cost center of IT infrastructure into a revenue generator for the business. In order to address the issues of control and poor data quality, there needs to be an ever-stronger bridge between IT and the business. This bridge has two components. The first is technology that is sophisticated enough to handle complex data issues yet easy enough to provide a quick time-to-value. The second is people who can bridge the gap between IT's concerns (systems, storage, access) and business users' need for value and results (enter data analysts and data engineers).

    This bridge needs to be built with three key components in mind:

    • Customer experience:

      For any B2C company, customer experience is the number one hot topic of the day and a primary way they are leveraging data. A new 2019 data management benchmark report found that 98% of companies use data to improve customer experience. And for good reason: between social media, digital streaming services, online retailers and others, companies are looking to show the consumer that they aren't just a corporation, but that they are the corporation most worthy of building a relationship with. This invariably involves creating a single view of the customer (SVC), and that view needs to be built around context and based on the needs of the specific department within the business (accounts payable, marketing, customer service, etc.).
    • Trust in data:

      Possessing data and trusting data are two completely different things. Lots of companies have lots of data, but that doesn't mean they automatically trust it enough to make business-critical decisions with it. Research finds that on average, organizations suspect 29% of current customer/prospect data is inaccurate in some way. In addition, 95% of organizations see impacts in their organization from poor quality data. A lack of trust in the data available to business users paralyzes decisions and, even worse, leads to the wrong decisions being made on faulty assumptions. How often have you received a report and questioned the results? More than you'd like to admit, probably. To get around this hurdle, organizations need to drive culture change around data quality strategies and methodologies. Only by completing a full assessment of data, developing a strategy to address the existing and ongoing issues, and implementing a methodology to execute on that strategy, will companies be able to turn the corner from data suspicion to data trust.
    • Changing data ownership:

      The responsibilities between IT and the business are blurring. 70% of businesses say that not having direct control over data impacts their ability to meet strategic objectives. The reality is that the definitions of control are throwing people off. IT thinks of control as storage, systems, and security. The business thinks of control as access, actionability, and accuracy. The role of the CDO is helping to bridge this gap, bringing the nuts and bolts of IT in line with the visions and aspirations of the business.
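
As a minimal, hypothetical sketch of the "full assessment of data" step, one could run each customer record through simple validation rules and report the share that fails (the rules and sample records are invented for illustration):

```python
import re

# Hypothetical validation rules: each maps a record to pass/fail.
RULES = {
    "has_email": lambda r: bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", r.get("email", ""))),
    "has_name": lambda r: bool(r.get("name", "").strip()),
}

customers = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "", "email": "grace@example.com"},        # missing name
    {"name": "Alan Turing", "email": "not-an-email"},  # malformed email
]

def assess(records):
    """Return the fraction of records failing at least one rule."""
    failing = sum(1 for r in records if not all(rule(r) for rule in RULES.values()))
    return failing / len(records)

print(f"{assess(customers):.0%} of records are suspect")  # 67%
```

Tracking this number over time is one way to measure whether a data-quality strategy is actually moving an organization from suspicion to trust.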

    The bottom line is that for most companies data is still a shifting sea of storage, software stacks, and stakeholders. The stakeholders are key, both from IT and the business, and in how the two can combine to provide the oxygen the business needs to survive: better customer experience, more personalization, and an ongoing trust in the data they administrate to make the best decisions to grow their companies and delight their customers.

    Author: Kevin McCarthy

    Source: Dataversity

  • DataOps and the path from raw to analytics-ready data

    DataOps and the path from raw to analytics-ready data

    For the first time in human history, we have access to the second-by-second creation of vast quantities of information from nearly every activity of human life. It’s a tectonic shift that’s transforming human society. And among the myriad impacts is an important one for every business: the shift in data users’ expectations. In the same way that the advent of smartphones triggered expectations of access and convenience, the explosion in data volume is now creating expectations of availability, speed, and readiness. The scalability of the internet of things (IoT), AI in the data center, and software-embedded machine learning are together generating an ever-growing demand in the enterprise for immediate, trusted, analytics-ready data from every source possible.

    It makes complete sense, since there’s a direct correlation between your business’s ability to deliver analytics-ready data and your potential to grow your business. But as every data manager knows, yesterday’s infrastructure wasn’t built to deliver on today’s demands. Traditional data pipelines using batch and extended cycles are not up to the task. Neither are the legacy processes and lack of coordination that grew out of the siloed way we’ve traditionally set up our organizations, where data scientists and analysts are separate from line-of-business teams.

    As a result, enterprises everywhere are suffering from a data bottleneck. You know there’s tremendous value in raw data, waiting to be tapped. And you understand that in today’s data-driven era, success and growth depend on your ability to leverage it for outcomes. But the integration challenges presented by multi-cloud architecture put you in a difficult position. How can you manage the vast influx of data into a streamlined, trusted, available state, in enough time to act? How can you go from raw to ready for all users, in every business area, to uncover insights when they’re most impactful? And perhaps most importantly, how can you make sure that your competitors don’t figure it all out first?

    The raw-to-ready data supply chain

    There’s good news for everyone struggling with this issue.

    First, the technology is finally here. Today's data integration solutions have the power to collect and interpret multiple data sets; eliminate information silos; democratize data access; and provide a consistent view of governed, real-time data to every user across the business. At the same time, the industry trend of consolidating data management and analytics functions into streamlined, end-to-end platforms is making it possible for businesses to advance the speed and the accuracy of data delivery. And that, in turn, is advancing the speed and accuracy of insights that can lead to new revenue creation.

    And second, we’re seeing the emergence of DataOps, a powerful new discipline that brings together people, processes, and technologies to optimize data pipelines for meeting today’s considerable demands. Through a combination of agile development methodology, rapid responses to user feedback, and continuous data integration, DataOps makes the data supply chain faster, more efficient, more reliable, and more flexible. As a result, modern data and analytics initiatives become truly scalable, and businesses can take even greater advantage of the data revolution to pull ahead.

    What is DataOps for analytics?

    Like DevOps before it, which ignited a faster-leaner-more-agile revolution in app development, DataOps accelerates the entire ingestion-to-insight analytics value chain. Also like DevOps, DataOps is neither a product nor a platform; it’s a methodology that encompasses the adoption of modern technologies, the processes that bring the data from its raw to ready state, and the teams that work with and use data.

    By using real-time integration technologies like change data capture and streaming data pipelines, DataOps disrupts how data is made available across the enterprise. Instead of relying on the stutter of batch orientation, it moves data in a real-time flow for shorter cycles. Additionally, DataOps introduces new processes for streamlining the interaction among data owners, database administrators, data engineers, and data consumers. In fact, DataOps ignites a collaboration mentality (and a big cultural change) among every role that touches data, ultimately permeating the entire organization.
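
As a toy illustration of the change-data-capture idea (the table and fields are hypothetical, not any specific product's API), a consumer can track the last version it has seen and pull only rows changed since then, instead of reloading everything in a batch:

```python
# Hypothetical source table; "version" grows with every change.
source_table = [
    {"id": 1, "name": "widget", "version": 1},
    {"id": 2, "name": "gadget", "version": 2},
]

last_seen = 0  # the consumer's sync point

def capture_changes():
    """Emit only rows with a version newer than the last sync point."""
    global last_seen
    changes = [row for row in source_table if row["version"] > last_seen]
    if changes:
        last_seen = max(row["version"] for row in changes)
    return changes

print(len(capture_changes()))  # first sync: both rows are new
source_table[0] = {"id": 1, "name": "widget v2", "version": 3}
print(len(capture_changes()))  # next sync: only the one changed row
```

Real CDC tools read database transaction logs rather than polling, but the contract is the same: downstream consumers see a continuous flow of deltas instead of periodic full reloads.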

    What does DataOps look like from a data-user perspective?

    In a subsequent post, I’ll delve more granularly into the technical and procedural components of DataOps for Analytics, looking at it from an operational perspective. For this post, where I want to highlight the business impact, I’ll start with a quick overview of what DataOps looks like from a data-user perspective.

    • All data, trusted, in one simplified view: Every data-user in the enterprise has 24/7 access to the data (and combinations of data) they need, in an intuitive and centralized marketplace experience. Analysts of every skill level can load, access, prepare, and analyze data in minutes without ever having to contact IT.
    • Ease of collaboration: It becomes faster and easier for data scientists and business analysts to connect, collaborate, and crowd-source key information. For example, the identification and surfacing of the most popular and reliable data sets becomes possible.
    • Reliability and accuracy: Because the data is governed and continuously updated, with all users drawing from the same data catalogue, trust is high, teams are aligned, and insights are reliable.
    • Automation: Users are freed to ask deeper questions sooner, thanks to the automation of key repeatable requests. And with AI-enabled technologies that suggest the best visualization options for a given data set, chart creation is faster and easier, too. Other AI technologies point users toward potential new insights to explore, prompting them to reach relevant and previously undiscovered insights.
    • Ease of reuse: Data sets do not have to be generated again and again, for every application, but rather can be reused as needs arise and relevance expands – from planning and strategy to forecasting and identifying future opportunities in an existing client base.
    • Increased data literacy: DataOps fosters the easiest kind of data literacy boost by automating, streamlining, and simplifying data delivery. Regardless of existing skill levels, every member of your team will find it much more intuitive to work with data that’s readily available and trusted. At the same time, DataOps buttresses the more active efforts of skills training by delivering reliable data in real time. Getting the right data to the right people at the right time keeps even the most advanced analysts moving forward in new directions.

    What are the business outcomes?

    In every era, speed has given businesses a competitive advantage. In the data-driven era, where consumers expect real-time experiences and where business advantage can be measured in fractions of a second, speed has become more valuable than ever. One of the fundamental advantages of DataOps for Analytics is the speed of quality data delivery. The faster you can get data from raw to ready (ready for analysis, monetization, and productization), the faster you can reap all the benefits data promises to deliver.

    But speed is just the beginning. By delivering governed, reliable, analytics-ready data from a vast array of sources to every user in the enterprise, the raw-to-ready data supply chain becomes an elegant lever for business transformation and growth. Here are four key areas where DataOps galvanizes transformation:

    1. Customer intelligence: With an agile data supply chain, you can much more efficiently use analytics to improve customer experience and drive increased lifetime value. Discover deeper customer insights faster, and use them to customize interactions; increase conversion; and build long-term, one-to-one customer relationships by offering personalized experiences at scale.
    2. Reimagined processes: Accelerating, streamlining, and automating your data pipelines enables teams across your organization to more quickly and effectively optimize every aspect of business for efficiency and productivity. This includes automating processes, reducing costs, optimizing the overall supply chain, freeing up scarce resources, improving field operations, and boosting performance.
    3. Balanced risk and reward: Nimble data-delivery empowers analytics users to get timely insight into internal and external factors to make faster, smarter decisions around risk. Leaders can manage production; keep data current, consistent, and in the right hands; and stay compliant while preparing for the future.
    4. New business opportunities: And finally, a raw-to-ready data supply chain gives you the power to develop new products, services, and revenue streams with insights gleaned from data and/or to monetize the data itself. This may be the most exciting opportunity we’re seeing with DataOps for Analytics today; it’s certainly the most transformative. For example, consider how storied American conglomerate GE has transformed a century-old business model (selling hardware) to create a digital platform for commodifying their data. And think about how tech behemoths like Amazon and Google have used their massive stores of data and agile analytics capabilities to attack and disrupt traditional markets like insurance, banking and retail.

    The heart of digital transformation

    If you're a CIO or CDO launching, or already underway with, strategic digital transformation programs for competitive viability, data is the key. To thrive, your initiatives need an agile, integrated data and analytics ecosystem that provides a raw-to-ready data supply chain, accelerates time-to-insight, and enables a rapid test-and-learn cycle. That's DataOps for Analytics, and it's the dawn of a new era in the evolution of the data-driven organization.

    Author: Mike Capone

    Source: Qlik

  • A look at the data transformation of the world's largest beer brewer

    A look at the data transformation of the world's largest beer brewer

    The world's largest brewer of beer, AB InBev, had a big problem with its data: it was everywhere, but nobody knew how to use it. That has since changed, following a major transformation of the organization.

    Anheuser-Busch InBev, the brewer of Budweiser, Corona and more than 500 other beer brands, had a data problem when Harinder Singh joined the company in 2017 as global director of data strategy and solution architecture.

    Thanks in part to the acquisition of more than a dozen beer brands in recent years, AB InBev had amassed an abundance of data across more than 100 countries, stored in on-premises and cloud systems. Singh's objective? To consolidate and unify the data and make it available to business users through a single 'lens'.

    'My colleagues here tell me that three years ago, thinking about technology or data wasn't exactly top of mind', says Singh, who held a similar role at Walmart eCommerce before joining AB InBev. 'Business transformation has to be enabled by digital transformation, and data is at its core'.

    Data from the tap

    Treating 'business data as the new oil', companies are willing to pay handsomely for software that can clean and organize data to yield business insights; worldwide revenue for big data and business analytics software is expected to reach some 260 billion dollars by 2022, according to market research from IDC.

    But if data is the new oil, integrating it is the equivalent of getting it out of the ground, putting it in a digital barrel, and making it ready for consumption. The problem? Data is becoming increasingly fragmented across organizations, particularly as legacy applications are replaced by new, loosely coupled applications.

    AB InBev's data was stored in more than 100 source systems, 15 SAP systems and 27 ERP systems. The company also relied on 23 separate ETL (extract, transform, load) tools to move data from one database to another.

    This approach made it difficult to get a unified view of the data, Singh says. And now that the GDPR (General Data Protection Regulation) is in force, AB InBev needs global visibility into its data, some of which concerns consumers and is therefore subject to multiple privacy laws.

    'We still need to standardize and integrate that data, another aspect of our data challenge', says Singh.

    Singh ran a month-long proof of concept with Talend, a vendor of cloud data integration software, before selecting the vendor. AB InBev uses Talend, a cloud-based ETL tool for the modern era, to extract data from various sources, including cloud and on-premises systems and data from IoT devices, and to store it in a Hortonworks Hadoop data lake hosted on Microsoft Azure.

    That data is then processed and archived before Talend shuttles it to a 'golden layer', which data scientists, operational staff and business users can access through data visualization tools. AB InBev's reusable data management architecture also includes open source tools such as Hive, Spark, HBase and Kafka, Singh says.
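
A simplified Python sketch of the layered flow described above: raw extracts are standardized into a processed layer, then curated into an analytics-ready "golden layer". The layer names, records, and aggregation here are illustrative only, not AB InBev's actual pipeline (which uses Talend, Hadoop, Spark, and Kafka):

```python
# Hypothetical raw extracts from heterogeneous source systems.
raw_layer = [
    {"source": "erp_7", "sku": "BUD-330", "units": "12", "country": "us "},
    {"source": "sap_3", "sku": "COR-355", "units": "8", "country": "MX"},
]

def to_processed(rows):
    """Standardize types and formats across source systems."""
    return [
        {"sku": r["sku"], "units": int(r["units"]), "country": r["country"].strip().upper()}
        for r in rows
    ]

def to_golden(rows):
    """Curate an analytics-ready view: units sold per country."""
    golden = {}
    for r in rows:
        golden[r["country"]] = golden.get(r["country"], 0) + r["units"]
    return golden

golden = to_golden(to_processed(raw_layer))
print(golden)  # {'US': 12, 'MX': 8}
```

The point of the layering is that business users only ever query the curated top layer, while the messy standardization work stays upstream and reusable.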

    The analysis of beer

    Where AB InBev employees once spent 70% to 80% of their time locating relevant data across different systems, they now retrieve information for analysis from a single source. The work is ongoing, but Singh is confident that the data platform positions AB InBev's workforce to gain more critical insights into sales, supply chain management, marketing, human resources and other lines of business.

    For example, AB InBev collects consumer data from Nielsen and market surveys, along with near-real-time data from social media, to analyze trends and deliver the right beers and more targeted marketing campaigns, including real-time coupons tailored to the consumer at the point of purchase. AB InBev can also identify the best in-store location to sell beers, as well as how to create real-time events to drive more conversion.

    The company uses analytics to optimize its supply chain. IoT (Internet of Things) data from RFID (radio-frequency identification) devices helps track so-called 'connected packages' to find the best routes for delivery drivers, and helps regulate the temperatures in millions of beer coolers around the world to ensure that AB InBev's products are stored and served at the optimal temperature.

    Singh concedes that AB InBev is still working through technical backlogs of outdated processes and technologies, but he attributes AB InBev's successful data transformation to three factors: a cloud-first approach, the use of data for business insight, and the development of reusable processes for rapidly extracting data and making it accessible.

    Source: CIO

  • Down to Business: Seven tips for better market intelligence

    Making decisions about product and service offerings can make or break your success as a business. Business owners, executives and product managers need good information and data to make the most informed product decisions.

    This critical information about markets, customers, competitors and technology is called market intelligence. Market intelligence combined with analysis provides market insight and allows better decision making.

    Here are seven tips for better market intelligence:

    1. Develop a process: Your ability to harness, manage and analyze good data is vital to your success. Assure you develop a process for gathering, storing and utilizing market intelligence. Take the time to train your team and invest in a robust market intelligence process. It's an investment with an excellent return.

    2. Gather data when you lose: Often when a company loses an order we ask the salesperson what happened and they offer an opinion. It's important to drill down and really understand why you lost an important order. I recall a situation years ago where a salesperson's opinion was very different from what ultimately was the actual reason we lost this large order. Understanding the real reason for the loss assures you are far more likely to choose correct strategies to win the order in the future. Trust, but verify.

    3. Attend trade shows: You should attend trade shows and use them as a fact-finding mission. Trade shows are like one-stop shopping for market intelligence. There are industry analysts, suppliers, customers and industry media all in one location. Use your time wisely to engage with as many people as possible and utilize your listening skills. It's always best to plan ahead for trade shows, to make the best use of your limited time there. Make sure you stay at the hotel suggested by the show organizers. The "show hotel" may cost a little more than other hotels in the area, but you will have far more opportunities to gather information. You can also consider hiring someone, who does not work for your company, to gather information at trade shows, or speak with an industry analyst. This "stealth mode" of gathering market intelligence can provide added benefits.

    4. Take a customer to lunch: Understanding your customers, their challenges and their perception is one of the best ways to gain market insight. Ultimately it is your customer's perceptions that determine your brand positioning. Spending time with your customers, listening to them and acting on these insights, can provide you with an amazing competitive advantage.

    5. Build a database: Data can be hard to find as time moves forward and people leave an organization. It's worthwhile to build a central database of your market intelligence. By indexing this data it becomes easy for your product managers and executives to have access to the best information when making decisions.
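
As a minimal sketch of such a central, searchable store (the schema and sample notes are hypothetical), Python's built-in sqlite3 module is enough to get started:

```python
import sqlite3

# An in-memory store for the sketch; a real deployment would use a file
# or a shared database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE intel (id INTEGER PRIMARY KEY, topic TEXT, note TEXT)")
conn.executemany(
    "INSERT INTO intel (topic, note) VALUES (?, ?)",
    [
        ("lost order", "Lost to Acme on delivery time, not price"),
        ("trade show", "Analyst expects 10% market growth next year"),
    ],
)

def search(keyword):
    """Return notes whose topic or text mentions the keyword."""
    cur = conn.execute(
        "SELECT note FROM intel WHERE topic LIKE ? OR note LIKE ?",
        (f"%{keyword}%", f"%{keyword}%"),
    )
    return [row[0] for row in cur.fetchall()]

print(search("price"))  # finds the lost-order note
```

Because notes are indexed by topic and searchable by keyword, the intelligence survives staff turnover instead of leaving with the person who gathered it.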

    6. Assure you have good data: It takes good, accurate data for the best results; never forget this point. Good data means better decisions. Accuracy can be improved by using multiple sources and considering how any specific source may be biased. Bad information leads to poor decisions. Ensure you are gathering good data.

    7. Train your team: You cannot gather good data that provides market intelligence unless you have a team of professionals that understands how to gain the best market insights. Assure you have a team that is trained not only on how to gather market intelligence, but also on how to analyze and use the data for better decision making. As an example, we offer a product management boot camp that covers this subject in detail, among other topics.

    Developing market intelligence takes work as well as a robust methodology. It's not a one-time event, but a continuous process. The absence of good data leads to suboptimal decisions. Good data leads to better decision-making and success for your organization.

  • Ethical Intelligence: can businesses take the responsibility?

    Ethical Intelligence: can businesses take the responsibility?

    Adding property rights to inherent human data could provide a significant opportunity and differentiator for companies seeking to get ahead of the data ethics crisis and adopt good business ethics around consumer data.

    The ability for a business to operate based on some amount of intelligence is not new. Even before business owners used manual techniques such as writing customer orders in a book or using calculators to help forecast how many pounds of potatoes might be needed to stock up for next week's sales, there were forms of "insight searching." Enterprises are always looking for operational efficiencies, and today they are gathering exponentially more intelligence.

    A significant part of business intelligence is understanding customers. The more data a company has about its current or prospective customers' wants, likes, dislikes, behaviors, activities, and lifestyle, the more intelligence that business can generate. In principle, more data suggests the possibility of more intelligence.

    The question is: are most businesses and their employees prepared to be highly intelligent? If a company were to reach a state where it has significant intelligence about its customers, could it resist the urge to manipulate them?

    Suppose a social media site uses data about past activities to conclude that a 14-year-old boy is attracted to other teenage boys. Before he discovers where he might be on the gay/straight spectrum, could the social media executives, employees, and/or algorithms resist the urge to target him with content tagged for members of the LGBTQ community? If they knowingly or unknowingly target him with LGBTQ-relevant content before the child discovers who he might be, is that behavior considered ethical?

    Looking for best practices

    Are businesses prepared to be responsible with significant intelligence, and are there best practices that would give a really intelligent business an ethical compass?

    The answer is maybe, leaning toward no.

    Business ethics is not something new either. Much like business intelligence, it evolved over time. What is new though, is that ethics no longer only have to be embedded into humans that make business decisions. It must also be embedded in automated systems that make business decisions. The former, although imperfect, is conceivable. You might be able to hire ethical people or build a culture of ethics in people. The latter is more difficult. Building ethics into systems is neither art nor science. It is a confluence of raw materials, many of which we humans still don't fully understand.

    Business ethics has two components. One is the aforementioned ethics in systems (sometimes called AI ethics) that is primarily focused on the design of algorithms. The other component of business ethics is data ethics, which can be measured from two dimensions: the algorithm and the raw material that goes into the algorithm (that is, the data).

    AI ethics is complex, but it is being studied. At the core of the complexity are human programmers who are usually biased and can have varying ethical frameworks and customs. They may create potentially biased or unethical algorithms.

    Data ethics is not as complex but is not widely studied. It covers areas such as consent for the possession of data, authorization for the use of data, the terms under which an enterprise is permitted to possess and use data, whether the value created from data should be shared with the data's source (such as a human), and how permission is secured to share insights derived from data.

    Another area of data ethics is whether the entire data set is representative of society. For example, is an algorithm determining how to spot good resumes being trained with 80 percent resumes from men and just 20 percent from women?
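
A quick representativeness check in the spirit of the resume example can be sketched as follows (the labels, expected share, and tolerance are illustrative assumptions, not a standard):

```python
from collections import Counter

def skew_report(labels, expected=0.5, tolerance=0.1):
    """Flag groups whose share deviates from `expected` by more than `tolerance`."""
    counts = Counter(labels)
    total = len(labels)
    return {
        group: count / total
        for group, count in counts.items()
        if abs(count / total - expected) > tolerance
    }

# The 80/20 split from the resume example above.
training_set = ["men"] * 80 + ["women"] * 20
print(skew_report(training_set))  # both groups flagged: 0.8 and 0.2
```

A check like this catches only gross imbalance; deciding what a representative sample should look like for a given decision remains a human, not a technical, question.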

    These are large social, economic, and historical constructs to sort out. As companies become exponentially more intelligent, the need for business ethics will increase likewise. As a starting point, corporations and executives should consider consent for and authorization of data used in business intelligence. Was the data collected with proper consent? Meaning: does the user really know that their data is being monetized or was it hidden in a long terms and conditions agreement? What were the terms and conditions? Was the data donated, was it leased, or was it "sort of lifted" from the user?

    Many questions, limited answers.

    The property rights model

    Silicon Valley is currently burning in a data ethics crisis. At the core is a growing social divide about data ownership between consumers, communities, corporations, and countries. We tend to anticipate that new problems need new solutions. In reality, sometimes the best solution is to take something we already know and understand and retrofit it into something new.

    One emerging construct uses a familiar legal and commercial framework to enable consumers and corporations to find agreement around the many unanswered questions of data ownership. This construct uses the legal and commercial framework of property as a set of agreements to bridge the growing divide between consumers and corporations on the issues of data ownership, use, and consideration for value derived from data.

    If consumer data is treated as personal property, consumers and enterprises can reach agreement using well-understood and accepted practices such as a title of ownership for one's data, track and trace of data as property, leasing of the data as property, protection from theft, taxation of income created from said data, tax write-offs for donating the data, and the ability to include data property as part of one's estate.
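
As a purely hypothetical sketch of this property-rights construct, a "title" record for a person's data might carry lease terms and a trace of use; all field and method names here are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTitle:
    """A hypothetical title of ownership for one person's data asset."""
    owner: str
    asset: str
    leased_to: Optional[str] = None
    trace: list = field(default_factory=list)  # track-and-trace log

    def lease(self, company, terms):
        """Lease (not sell) the data under explicit terms."""
        self.leased_to = company
        self.trace.append(f"leased to {company}: {terms}")

    def revoke(self):
        """End the lease; the asset stays with its owner."""
        self.trace.append(f"lease to {self.leased_to} revoked")
        self.leased_to = None

title = DataTitle(owner="Jane Doe", asset="purchase history")
title.lease("RetailCo", "ads only, 12 months")
title.revoke()
print(title.trace)
```

The point of the framing is that every transfer is an explicit, auditable agreement, exactly the property mechanics (title, lease, trace, revocation) the text describes.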

    For corporations and executives, with increasing business intelligence comes increasing business ethics responsibilities.

    What is your strategy?

    Author: Richie Etwaru

    Source: TDWI

  • Facebook to face lawsuit regarding 'worst security breach ever'

    Facebook to face lawsuit regarding 'worst security breach ever'

    Facebook Inc. failed to fend off a lawsuit over a data breach that affected nearly 30 million users, one of several privacy snafus that have put the company under siege.

    The company’s disclosure in September that hackers exploited several software bugs to obtain login access to accounts was tagged as Facebook’s worst security breach ever. An initial estimate that as many as 50 million accounts were affected was scaled back weeks later.

    A federal appeals court in San Francisco rejected the company’s request to block the lawsuit on June 21, saying claims against Facebook can proceed for negligence and for failing to secure users’ data as promised. Discovery should move 'with alacrity' for a trial, U.S. District Judge William Alsup said in his ruling. He dismissed breach-of-contract and breach-of-confidence claims due to liability limitations. Plaintiffs can seek to amend their cases by July 18.

    'From a policy standpoint, to hold that Facebook has no duty of care here ‘would create perverse incentives for businesses who profit off the use of consumers’ personal data to turn a blind eye and ignore known security risks', Judge Alsup said, citing a decision in a separate case.

    The world’s largest social network portrayed itself as the victim of a sophisticated cyber-attack and argued that it isn’t liable for thieves gaining access to user names and contact information. The company said attackers failed to get more sensitive information, like credit card numbers or passwords, saving users from any real harm.

    Attorneys for users called that argument 'cynical', saying in a court filing that Facebook has 'abdicated all accountability' while 'seeking to avoid all liability' for the data breach despite Chief Executive Officer Mark Zuckerberg’s promise that the company would learn from its lapses. The case was filed in San Francisco federal court as a class action.

    Facebook didn’t immediately respond to a request for comment.

    The Menlo Park, California-based company faces a slew of lawsuits and regulatory probes of its privacy practices after revelations in early 2018 that it allowed the personal data of tens of millions of users to be shared with political consultancy Cambridge Analytica. As lawmakers have focused greater scrutiny on the company, Zuckerberg called for new global regulation governing the internet in March, including rules for privacy safeguards.

    The case is Echavarria v. Facebook Inc., 3:18-cv-05982, U.S. District Court, Northern District of California (San Francisco).

    Author: Kartikay Mehrotra and Aoife White

    Source: Bloomberg

  • Four Drivers of Successful Business Intelligence

    Companies across industries face some very common scenarios when it comes to getting the most value out of data. The life science industry is no exception. Sometimes a company sets out to improve business intelligence (BI) for a brand, division or functional area. It spends many months or years and millions of dollars to aggregate all of the data it thinks it needs to better measure performance and make smart business decisions, only to yield more data. In another familiar scenario, a team identifies critical questions the BI system can't answer. Again, months and millions go into development. But by the time the system goes live, market and/or company conditions have changed so much that the questions are no longer relevant.

    Building Better Business Intelligence Systems
    Today's challenges cannot be met by throwing more dollars into the marketing budget or by building more, or bigger, data warehouses. Ultimately, navigating today's complexities and generating greater value from data isn't about more, it's about better. The good news is that other industries have demonstrated the power and practicality of analytics at scale. Technology has evolved to overcome fragmented data and systems. We are now observing a real push in life sciences for a BI capability that's smarter and simpler.

    So how do we build better business intelligence platforms? In working with life sciences companies around the globe, IMS Health has observed a recurring journey with three horizons of business intelligence maturity: alignment of existing KPIs, generation of superior insights and customer-centric execution (see Figure 1).

    What does it take to advance in business intelligence maturity?
    No matter where a company currently stands, there are four fundamental steps that drive BI success: aligning business and information management strategy, improving information management systems integration and workflow, engineering BI systems to derive more value and insights from data, and making the most of new cloud computing technologies and Software-as-a-Service (SaaS) models for delivery.

    Step 1: Align Business and Information Management Strategy
    Many IT and business leaders recognize that the traditional "build it and they will come" mentality can no longer sustain future growth in agile and cost-efficient ways. To be successful, companies need to focus upfront on developing an information management strategy that begins with the business in mind. Through a top-down and upfront focus on critical business goals, drivers and pain points, companies can ensure that key insights are captured to drive development of commercial information management strategies that align with prioritized business needs. Leading organizations have achieved success via pilot-and-prove approaches that focus on business value at each step of the journey. To be successful, the approach must be considered in the context of the business and operational strategies.

    Step 2: Improving Information Management Systems Integration and Workflow
    Although technology systems and applications have proliferated within many organizations, they often remain siloed and sub-optimized. Interoperability is now a key priority and a vehicle for optimizing commercial organizations: improving workflow speed, eliminating conflicting views of the truth across departments, and paring down the vendor teams managing manual data handoffs. Information and master data management systems must be integrated to deliver a unified view of the customer. When optimized, these systems can enable advanced BI capabilities ranging from improved account management and evolved customer interactions (i.e. account-based selling and management, insights on healthcare networks, and relationships with influencers and KOLs) to harnessing the power of big data and demonstrating value to all healthcare stakeholders.

    Step 3: Engineering BI Systems to Derive More Value and Insights from Data
    Life sciences companies compete on the quality of their BI systems and their ability to take action in the marketplace. Yet existing analytics systems often fail to deliver value to end users. Confusing visualizations, poorly designed data queries and gaps in underlying data are major contributors to a BI solution's inability to deliver needed insights.

    By effectively redesigning BI applications, organizations can gain new insights and build deeper relationships with customers while maximizing performance. Effective BI tools can also help to optimize interventions and the use of healthcare resources. They can drive post-marketing research by unearthing early signals of value for investigation, help companies better engage and deliver value to their customers, and contribute to improved patient outcomes. This information can advance the understanding of how medicine is practiced in the real world, from disease prevention through diagnosis, treatment and monitoring.

    Step 4: Making the Most of New Cloud Computing Technologies and Software-as-a-Service (SaaS) Models for Delivery
    Chief information officers (CIOs) are increasingly looking to adopt cloud technologies in order to bring the promise of technology to commercialization and business intelligence activities. They see the potential value of storing large, complex data sets, including electronic medical records and other real-world data, in the cloud. What's more, cloud companies have taken greater responsibility for maintaining government-compliant environments for health information.

    New cloud-based BI applications are fueling opportunities for life sciences companies to improve delivery of commercial applications, including performance management, advanced analytics, sales force automation, master data management and the handling of large unstructured data streams. As companies continue their journey toward BI maturity, getting the most from new technologies will remain a high priority. Leveraging cloud-based information management and business intelligence platforms will bring tremendous benefits to companies as approaches are revised amidst changing customer demands and an urgent need for efficiency.

    The Way Forward
    While each organization's journey will be unique, advancing in business intelligence maturity, and getting more value from data, can be achieved by all with these four steps. It's time for BI that's smarter and simpler and that realizes greater value from data. With focus and precision, and the support of business and technology experts, companies can home in on the key indicators and critical questions that measure, predict and enhance performance.

    Source: ExecutiveInsight

  • Gaining advantages with the IoT through 'Thing Management'

    Some are calling the industrial Internet of Things the next industrial revolution, bringing dramatic changes and improvements to almost every sector. But to be sure it’s successful, there is one big question: how can organizations manage all the new things that are part of their organizations’ landscapes?

    Most organizations see asset management as the practice of tracking and managing IT devices such as routers, switches, laptops and smartphones. But that’s only part of the equation nowadays. With the advent of the IoT, enterprise things now include robotic bricklayers, agitators, compressors, drug infusion pumps, track loaders, scissor lifts and the list goes on and on, while all these things are becoming smarter and more connected.

    Here are some examples from specific industries:

    ● Transportation is an asset-intensive industry that relies on efficient operations to achieve maximum profitability. To help customers manage these important assets, GE Transportation is equipping its locomotives with devices that manage hundreds of data elements per second. The devices decipher locomotive data and uncover use patterns that keep trains on track and running smoothly.

    ● The IoT’s promise for manufacturing is substantial. The IoT can build bridges that help solve the frustrating disconnects among suppliers, employees, customers, and others. In doing so, it can create a cohesive environment where every participant is invested in and contributing to product quality, and every customer’s feedback is learned from. Smart sensors, for instance, can ensure that every item, from articles of clothing to top-secret defense weapons, has the same quality as the one before. The challenge is that the many pieces of the manufacturing puzzle and the devices in the IoT move so quickly that spreadsheets and human analysis alone are not enough to manage them.

    ● IoT in healthcare will help connect a multitude of people, things with smart sensors (such as wearables and medical devices), and environments. Sensors in IoT devices and connected “smart” assets can capture patient vitals and other data in real time. Then data analytics technologies, including machine learning and artificial intelligence (AI), can be used to realize the promise of value-based care. There’s significant value to be gained, including operational efficiencies that boost the quality of care while reducing costs, clinical improvements that enable more accurate diagnoses, and more.

    ● In the oil and gas industry, IoT sensors have transformed efficiencies around the complex process of natural resource extraction by monitoring the health and efficiency of hard-to-access equipment installations in remote areas with limited connectivity.

    ● Fuelled by greater access to cheap hardware, the IoT is being used with notable success in logistics and fleet management by enabling cost-effective GPS tracking and automated loading/unloading.

    All of these industries will benefit from the IoT. However, as the IoT world expands, these industries and others are looking for ways to track the barrage of new things that are now pivotal to their success. Thing Management pioneers such as Oomnitza help organizations manage devices as diverse as phones, forklifts, drug infusion pumps, drones and VR headsets, providing an essential service as the industrial IoT flourishes.

    Think IoT, not IoP

    To successfully manage these Things, enterprises are not only looking for Thing Management. They are also rethinking the Internet, not as the Internet of People (IoP), but as the Internet of Things (IoT). Things aren’t people, and there are three fundamental differences.

    Many more things are connected to the Internet than people

    John Chambers, former CEO of Cisco, recently declared there will be 500 billion things connected by 2024. That’s more than 60 times the number of people on the planet.

    Things have more to say than people

    A typical cell phone has around 14 sensors, including an accelerometer, GPS, and even a radiation detector. Industrial things such as wind turbines, gene sequencers, and high-speed inserters can easily have over 100 sensors.

    Things can speak much more frequently

    People enter data at a snail’s pace when compared to the barrage of data coming from the IoT. A utility grid power sensor, for instance, can send data 60 times per second, a construction forklift once per minute, and a high-speed inserter once every two seconds.
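    The rates above can be turned into rough daily message volumes with a little arithmetic. The sketch below is illustrative only; the device names and rates come straight from the examples in the text, not from real telemetry.

```python
# Rough, illustrative arithmetic for the message rates quoted above.
SECONDS_PER_DAY = 24 * 60 * 60

# messages per second for each example device
rates = {
    "grid power sensor": 60.0,        # 60 readings per second
    "construction forklift": 1 / 60,  # one reading per minute
    "high-speed inserter": 1 / 2,     # one reading every two seconds
}

daily_messages = {name: rate * SECONDS_PER_DAY for name, rate in rates.items()}

for name, count in daily_messages.items():
    print(f"{name}: {count:,.0f} messages/day")
```

    Even this toy calculation shows the spread: the grid sensor alone produces over five million messages a day, while the forklift produces 1,440, which is why fleet-scale thing management quickly outgrows spreadsheets.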

    Technologists and business people both need to learn how to collect and put all of the data coming from the industrial IoT to use and manage every connected thing. They will have to learn how to build enterprise software for things versus people.

    How the industrial IoT will shape the future

    The industrial IoT is all about value creation: increased profitability, revenue, efficiency, and reliability. It starts with the target of safe, stable operations and meeting environmental regulations, translating to greater financial results and profitability.

    But there’s more to the big picture of the IoT than that. Building the next generation of software for things is a worthy goal, with potential results such as continually improving enterprise efficiency and public safety, driving down costs, decreasing environmental impacts, boosting educational outcomes and more. Companies like GE, Oomnitza and Bosch are investing significant amounts of money in the ability to connect, collect data, and learn from their machines.

    The IoT and the next generation of enterprise software will have big economic impacts as well. The cost savings and productivity gains generated through “smart” thing monitoring and adaptation are projected to create $1.1 trillion to $2.5 trillion in value in the health care sector, $2.3 trillion to $11.6 trillion in global manufacturing, and $500 billion to $757 billion in municipal energy and service provision over the next decade. In total, IoT technologies could generate anywhere from $2.7 trillion to $14.4 trillion in value by 2025.

    Author: Timothy Chou

    Source: Information-management

  • Healthcare analytics and the opportunities to improve patient care

    Healthcare: everyone needs it, it’s a rapidly technologizing industry, and it produces immense amounts of data every day.

    To get a sense of where analytics fit into this vital market, Sisense interviewed Hamza Jap-Tjong, CEO and Co-Founder of GeriMedica Inzicht, a GeriMedica subsidiary. GeriMedica is a multi-disciplinary electronic medical record (EMR) company serving the elderly care market, and as such, its SaaS platform is filled with data of all kinds. Recently, it rolled out analytics that practitioners can use to improve the quality of care (versus the prior main use of healthcare analytics, by billing and finance departments). This keeps practitioners focused on helping patients instead of spending (wasting) hours in a software product. Hamza opened up about the state of healthcare analytics, how it can improve care for patients, and where the industry is going.

    The state of healthcare analytics

    As previously mentioned, the healthcare industry creates tons of data every day from a wide array of sources.

    'I think tons of data might be an understatement', says Hamza, citing a Stanford study. 'They were talking about data on the scale of exabytes (an exabyte equals a billion gigabytes). Where does all that data come from? Fitbits, iPhones, fitness devices on your person… healthcare data is scattered everywhere: not only treatment plans and records created by practitioners, but also stored in machines (X-rays, photographs, etc.)'.

    Data is the new oil, but without the right tools, the insights locked in the data can’t help anyone. At present, few healthcare organizations (let alone frontline practitioners) are taking advantage of the data at their disposal to improve patient care. Moreover, these teams are dealing with amounts of information so vast that they are impossible to make sense of without help (like from a BI or analytics platform). They can’t combine these datasets to gain a complete picture without help, either. Current software offerings, even if they have some analytical capabilities for the data that they capture, often can’t mash it up with other datasets.

    'In my opinion, we could really improve the data gathering', says Hamza. 'As well as the way we use that data to improve patient care. What we know is that when you look at doctors, nurses, physical therapists, everybody close to care processes and patients is hankering for data and insights and analytics, and we see that at the moment there isn’t a tool that is good enough or easy enough for them to use to gain the insights that they are looking for'.

    Additionally, the current generation of medical software has a high barrier to entry/learning curve when it comes to getting useful insights out. All these obstacles prevent caregivers from helping clients as much as they might be able to with analytics that are easier to use.

    Improving patient care (and improving analytics for practitioners)

    Analytics and insight-mining systems have huge potential to improve patient care. Again, healthcare data is too massive for humans to handle unaided. However, there is hope: Hamza mentioned that AI systems were already being used in medical settings to aggregate research and present an array of options to practitioners without them having to dig through numerous sources themselves.

    'Doctors or nurses usually don't work nine-to-five. They work long shifts and their whole mindset is focused on solving mysteries and helping the patients. They don't have time to scour through all kinds of tables and numbers. They want an easy-to-understand dashboard that tells a story from A to Z in one glance and answers their question'.

    This is a huge opportunity for software and analytics companies to help improve patient care and user experience. Integrating easy-to-understand dashboards and analytics tools within medical software lowers the barrier to entry and serves up insights that practitioners can use to make better decisions. The next step is also giving clinicians the right tools to build their own dashboards to answer their own questions.

    The future of healthcare analytics

    Many healthcare providers might not know how much analytics could be improving their work and the care they give their patients. But they certainly know that they’re spending a lot of time gathering information and putting it into systems (and, again, that they have a ton of data). This is slowly changing today and will only accelerate as time goes on. The realization of how much a powerful analytics and BI system could help them with data gathering, insight harvesting, and providing better care will drive more organizations to start using a software’s analytics capabilities as a factor in their future buying decisions.

    Additionally, just serving up insights won’t be enough. As analytics become more mainstreamed, users will want the power to dig into data themselves, perform ad hoc analyses, and design their own dashboards. With the right tools and training, even frontline users like doctors and nurses can be empowered to create their own dashboards to answer the questions that matter most to them.

    'We have doctors who are designers', says Hamza. 'They are designing their own dashboards using our entire dataset, combining millions of rows and records to get the answers that they are looking for'.

    Builders are everywhere. Just as the healthcare space is shifting away from using analytics only in financial departments and putting insights into the hands of frontline practitioners, the right tools democratize the ability to create new dashboards and even interactive analytics widgets, empowering anyone within an organization to get the answers and build the tools they need. Like many other industries, healthcare has to go through a technological transformation.

    Creating better experiences

    When it comes to the true purpose of healthcare analytics, Hamza summed it up perfectly:

    'In the end, it’s all about helping end users create a better experience'.

    The staggering volume of data that the healthcare industry creates presents a huge opportunity for analytics to find patterns and insights and improve the lives of patients. As datasets become more massive and the analytical questions become more challenging, healthcare teams will rely more and more on the analytics embedded within their EMR systems and other software. This will lead them to start using the presence (or lack thereof) and quality of those analytics when making decisions. Software companies that understand this will build solutions that answer questions and save lives, the ones that don’t might end up flatlining.

    Author: Jack Cieslak

    Source: Sisense

  • How business analytics can benefit your business strategically

    Business analytics can provide companies with an accurate and holistic view of their business. Executives and managers now have the ability to use data for real-time, actionable insights into everything from customer buying patterns to inventory management without having to rely on IT for outdated, static reports. In this blog, we discuss five strategic benefits of business analytics.

    Strategic benefit of business analytics 1: staff will have faster access to data

    Comparison: A conservative wait time for an IT-generated report is two days. In today’s fast-paced world, a lot can change in two days, and usually by the time reports are received, the data is out-of-date. Your executives and managers need access to up-to-date data in order to make quick decisions that will maintain your competitive advantage.

    How would your business look: With access to up-to-date data, your sales team is empowered when interacting with prospects. Over time, this will lead to increased revenue opportunities as sales staff become aware of what customers are buying and, more importantly, what they are not buying. With this data at their fingertips, your sales managers are able to monitor their teams’ performance on a daily basis to identify and implement strategies to improve performance overall. With an easy-to-learn and intuitive BI tool like Phocas, the typical ROI timeframe is between 2-4 months after implementation, but can sometimes be even faster. 

    Strategic benefit of business analytics 2: increase customer acquisition and retention

    Comparison: Sales reps rely on the right information in the right moment. Providing your reps with potentially outdated data may result in your reps wasting time as they hunt for current facts or figures. This could result in lost sales opportunities.

    How your business would look: Armed with current, relevant access to data, your reps are able to engage in more meaningful conversations that are of real value to your customers. By having  data on customer behavior patterns, previous customer feedback, customer preferences, and buying habits, your reps will know what your customers truly want and have the ability to demonstrate the value of your product or service to them. When prospects feel heard, they are more inclined to become loyal and satisfied customers. A quality BI tool will be accessible from mobile devices ensuring your reps have access to your data even when they are out of the office.

    Strategic benefit of business analytics 3: measure the effectiveness of campaigns  

    Comparison: Traditional marketing efforts are a game of trial and error. Businesses implement a strategy and wait to see if their efforts pay off. If sales increase, it’s assumed the strategy is successful. If not, the strategy is tweaked or scrapped for a new plan-of-action.

    How your business would look: BI empowers you to design, monitor, and evaluate the success of your promotional and marketing campaigns by offering real-time insight into how customers are reacting to them. By identifying which campaigns receive the best responses, you can streamline your marketing budget and allocate funds for the best ROI. If a campaign is not generating a positive response, you are able to quickly reorganize the promotion or customize the campaign message accordingly.

    Strategic benefit of business analytics 4: New sales opportunities will regularly present themselves

    Comparison: An Excel spreadsheet can inform your team that sales for a specific product are up, but it can’t clarify whether a specific color or other characteristic is performing better than others. Nor can spreadsheets indicate why certain products are underperforming. BI provides businesses with the ability to quickly evaluate data to identify sales issues and opportunities more effectively than ever before. 

    How your business would look: BI allows your team to quickly detect emerging sales trends by analyzing company data on customers as well as various market conditions. Your team will have the ability to swiftly visualize detailed changes in customer behaviors to reveal emerging opportunities.  By leveraging these insights, sales teams can improve the accuracy of their sales predictions and respond accordingly.

    Strategic benefit of business analytics 5: More stock moving off the shelves

    Comparison: Static reports identify the quantity of a product a company has on hand when the report is generated, and which products are slow moving or have become dead stock sitting in your warehouse graveyard. However, these reports cannot identify the cause of slow moving or dead stock, nor prevent future dead stock. It’s difficult for a company to avoid this situation without a tool in place to accurately monitor the purchasing process.

    How your business would look: BI can help you to isolate poor purchasing decisions because you are no longer relying on outdated static reports. With BI you are able to monitor inventory-to-purchase ratio, stock turns, and slow-moving stock by product, territory, or manufacturer. With BI, you are able to refine your inventory management processes. By identifying product selling patterns, you are able to reduce excess inventory and the cost to maintain it. Visualizations provide a clear picture of how much to order, when, and at what price. In addition to ensuring your stock moves, your managers are able to utilize the information to effectively adjust pricing tiers to increase your profit margins.
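    As a rough illustration of the inventory metrics mentioned above, the sketch below computes stock turns (cost of goods sold divided by average inventory value) and flags slow-moving stock. The product names, figures, and the threshold are invented for the example.

```python
# A minimal sketch of two inventory metrics: stock turns and slow-moving stock.
products = [
    # (name, cost_of_goods_sold_for_period, average_inventory_value)
    ("widget A", 120_000, 20_000),
    ("widget B", 18_000, 30_000),
    ("widget C", 90_000, 15_000),
]

def stock_turns(cogs: float, avg_inventory: float) -> float:
    """How many times inventory was sold and replaced over the period."""
    return cogs / avg_inventory

SLOW_MOVING_THRESHOLD = 2.0  # fewer than 2 turns per period counts as 'slow'

for name, cogs, avg_inv in products:
    turns = stock_turns(cogs, avg_inv)
    flag = " (slow-moving)" if turns < SLOW_MOVING_THRESHOLD else ""
    print(f"{name}: {turns:.1f} turns{flag}")
```

    A BI tool computes the same ratios continuously across territories and manufacturers instead of on a one-off spreadsheet, which is what makes the slow movers visible before they become dead stock.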

    Having your customer, sales, and inventory data at your fingertips gives you leverage to rapidly adapt to an ever-changing sales climate. With the right Business Intelligence tool in place companies are able to increase profit margins, reduce spending, and achieve competitive excellence.

    Source: Phocas Software

  • Microsoft takes next cybersecurity step

    Microsoft just announced that it is dropping the password-expiration policies that require periodic password changes in Windows 10 version 1903 and Windows Server version 1903. Microsoft explains this change and the rationale behind it in detail, emphasizing that it supports layered security and authentication protections beyond passwords but cannot express those protections in its baseline.

    Welcome move

    This is a most welcome step. Forcing users to change their passwords periodically works against security: it means consumers have to write passwords down to remember them, and it does nothing to stop hackers from stealing current passwords. Hackers generally use stolen passwords very quickly, and password complexity does little to prevent the use of stolen passwords either, since hackers can capture or steal a complex password just as easily as a simple one.

    Organizations should long ago have stopped relying on interactive passwords that users must enter themselves. Hopefully this move by Microsoft will help speed the transition to more secure forms of authentication. Finally, a big tech company (one that manages much of our daily authentication) is applying independent, reasoned thinking rather than going along with the crowd, whose password management practices are, however familiar, less secure.

    Alternative authentication forms and decentralized identity (DID)

    Biometrics on their own can also be hacked. So can one-time passwords, especially those that use SMS, and other authentication methods where man-in-the-middle or man-in-the-browser attacks are possible. What is more secure (and private) is another method Microsoft and many other organizations are starting to support: decentralized identity (DID), where users control their own identity and authentication information.

    Using this method, the user’s credential and identity data is maintained in a hardened enclave accessible only to the user via their own private key, which is typically unlocked using the user’s mobile phone and optionally another authentication factor. In the end, the consumer just gets a notice from the site they are trying to log into and confirms the login on their mobile phone (or other device) by clicking 'yes' to the login request, optionally adding a biometric, e.g. a fingerprint or an iris scan.
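    The confirm-to-log-in flow described above can be sketched as a challenge-response exchange. This is a deliberately simplified illustration: real DID systems use asymmetric signatures anchored to a decentralized identifier, whereas here a keyed hash (HMAC) stands in for the device's private-key signature, and all parties run in one process.

```python
import hashlib
import hmac
import secrets

# Key held only in the user's device enclave (never sent to the site).
device_key = secrets.token_bytes(32)

def site_issue_challenge() -> bytes:
    """The site generates a one-time random challenge for this login."""
    return secrets.token_bytes(16)

def device_confirm(challenge: bytes, key: bytes) -> bytes:
    """User taps 'yes'; the device signs the challenge with its secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def site_verify(challenge: bytes, response: bytes, registered_key: bytes) -> bool:
    """The site checks the response against the key registered at enrollment."""
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = site_issue_challenge()
response = device_confirm(challenge, device_key)
print(site_verify(challenge, response, device_key))  # True for the real device
```

    The key point the sketch captures is that no reusable password ever crosses the wire: each login proves possession of the key for a fresh challenge, so a stolen response is useless for the next login.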

    The bottom line is there is layered user authentication and the user doesn’t have to remember or enter an insecure password. And most importantly the user owns their own secured credential and identity data and no one can access it without user permission.

    Decentralized identities, the path to individual control

    DIDs are supported by many organizations today. Most (but not all) mega tech companies are joining the move to standardize DID technology. The companies not joining are generally the ones that continue to earn a living by monetizing consumer data, largely through advertising and data-resale activities. Adding fuel to the fire, some of these companies have an abysmal record when it comes to securing consumer data.

    Hopefully consumers will start protesting the monetization of their data by adopting DID as an authentication mechanism. It’s certainly a chicken-and-egg problem, but there is gradual adoption across sectors. For example, even the Bitcoin network has started accepting DIDs, and British Columbia in Canada has implemented them for small business identification.

    Web 3.0

    For sure, I will gladly sign up for a DID as soon as someone asks me to. I really am at my limit in tolerating password management policies. And I’m even more tired of being subject to continuous massive data breaches that steal my most personal and sensitive information, just because I live and transact.

    I don’t think anything else short of a massive re-architecting of the web and how we manage identity data will solve all these problems of data breaches and consumer data monetization and abuse.

    Author: Avivah Litan

    Source: Gartner

  • MicroStrategy: Take your business to the next level with machine learning

    MicroStrategy: Take your business to the next level with machine learning

    It’s been nearly 22 years since history was made across a chessboard. The place was New York City, and the event was Game 6 of a six-game match between IBM’s “Deep Blue” and the reigning world champion Garry Kasparov. It was the first time a computer had defeated a player of that caliber in a multi-game setting, and it kicked off a wave of innovation that has been methodically working its way into the modern enterprise.

    Deep Blue was a formidable opponent because of its brute-force approach to chess. In a game where luck is entirely removed from the equation, it could run a search algorithm on a massive scale to evaluate moves, discarding candidate moves once they proved to be less valuable than a previously examined and still available option. This giant decision tree powered the computer to a winning position in just 19 moves, with Kasparov resigning.
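
That pruning idea, abandoning a candidate move as soon as it is provably worse than an option already available earlier in the search, survives today as alpha-beta pruning. A small illustrative sketch on a hand-made game tree (not chess, and not Deep Blue’s actual code):

```python
# Minimax with alpha-beta pruning on a toy game tree: stop exploring a
# line once it is provably worse than an option already available
# higher up the tree.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):        # leaf: static evaluation score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                     # prune: this line will never be allowed
            break
    return best

# A depth-2 tree: each inner list holds the replies to one candidate move.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
assert alphabeta(tree, maximizing=True) == 3
```

In the second and third branches, the search cuts off as soon as a reply of value 2 appears, because the first branch already guarantees the maximizing player a 3.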

    As impressive as Deep Blue was back then, present-day approaches are stronger by orders of magnitude, many of them inspired by the neural networks of the human brain. Data scientists create inputs and define outputs to detect previously indecipherable patterns, identify the important variables that influence games, and ultimately determine the next move to take.

    Models can also continue to ‘learn’ by playing different scenarios and updating themselves through a process called ‘reinforcement learning’ (as the Go-playing AlphaZero program does). The result? The ability to process millions of scenarios in a fraction of a second to determine the best possible action, with implications far beyond the gameboard.
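
The reinforcement-learning loop can be illustrated with tabular Q-learning on a toy "corridor" task. AlphaZero’s real training combines self-play with deep networks, so treat the environment, rewards, and hyperparameters below as invented for illustration only.

```python
import random

# Tabular Q-learning on a 5-state corridor: start at state 0, reward 1 for
# reaching state 4. Each episode the agent plays, observes the outcome, and
# updates its value estimates -- the "learn by playing scenarios" loop.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left / step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                     # episodes
    s = 0
    while s != GOAL:
        a = (random.randrange(2) if random.random() < epsilon
             else max((0, 1), key=lambda i: Q[s][i]))
        s_next = min(max(s + ACTIONS[a], 0), GOAL)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted
        # value of the best follow-up action.
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# The learned greedy policy steps right from every non-goal state.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
assert policy == [1, 1, 1, 1]
```

The same update rule, scaled up with function approximation and self-play, is the core of modern game-playing agents.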

    Integrating machine learning models into your business workflows comes with its challenges: business analysts are typically unfamiliar with machine learning methods and/or lack the coding skills necessary to create viable models; integration issues with third-party BI software may be a nonstarter; and the need for governed data to avoid incorrectly trained models is a barrier to success.

    As a possible solution, one could use MicroStrategy as a unified platform for creating and deploying data science and machine learning models. With APIs and connectors to hundreds of data sources, analysts and data scientists can pull in trusted data. And when using the R integration pack, business analysts can produce predictive analytics without coding knowledge and disseminate those results throughout their organization.

    The use cases are already coming in as industry leaders put this technology to work. As one example, a large governmental organization reduced employee attrition by 10% using machine learning, R, and MicroStrategy.

    Author: Neil Routman

    Source: MicroStrategy

  • Only Half of Companies Actually Use the Competitive Intelligence They Collect

    For more than 30 years, most large corporations worldwide have adopted competitive intelligence (CI) as a way to expedite good decisions. And yet for almost every company that uses CI in their decision-making, there’s another that disregards CI’s mix of industry analysis, rival positions, and market insight, to its detriment.

    We recently conducted a survey of CI managers and analysts who’ve been through our training program to see how much their findings influenced major company decisions, and why. We received 236 responses from 21 industries in U.S. and European corporations, from CI-trained analysts in marketing, business development, strategy, R&D, finance, and other fields. They had an average of 6.3 years of experience using CI frameworks and tools, and 62% were from companies with over $1 billion in annual sales revenue.

    We found that 55% of our respondents said that their input on major management decisions made enough difference to improve the decision. But 45% said their CI analysis did not.

    Why did some analysts have their input incorporated, while others didn’t? Our survey suggested several key reasons.

    First, many executives decide on a course of action and then use CI to ratify their choice. When asked, “What percent of your reports do you feel are just ‘confirmatory’ for an executive who already made a decision?” a full one-third of our respondents claimed “high” or “very high.” In these cases, the analysis may just be an obligation to be checked off a list.

    We also ran several simple OLS regression models, testing more than two dozen variables to see whether they affected which companies actually allowed their CI analyses to influence their decisions. In the end, four variables turned out to be highly significant in explaining the difference in impact.

    1. The analyst was assigned a “sign-off” authority over major decisions. The single most effective way to ensure intelligence is used in any given decision is to give the analyst a say in moving it forward. In practical terms this means the analyst – not just the PowerPoint deck – becomes part of discussions leading to the decision. That is the one area where “intelligent organizations” differ most from others.

    2. Management was open to perspectives that differed from the internal consensus. Management that was more open to different perspectives was also more likely to ask the analyst for the “big picture” rather than just the data.

    3. The analyst’s report called for proactive action more than reaction. Most companies are reactive by nature, and a lot of intelligence is about reacting to competitors’ moves. However, the decisions that matter more may well be those that are proactive. When the analyst provided proactive recommendations, the analysis had more of an impact.

    4. The analyst was involved in product launches. We don’t know why analysts in this area felt particularly impactful, but we do know that competitive intelligence is highly popular in tactical areas, and that product launches are an area where companies are most worried about competitors’ responses; successful product launches depend on correctly gauging the response of other players in the market. These include, naturally, customers and competitors, but also the less obvious responses by distribution channels, regulatory authorities, and influencing agents. Lack of insightful anticipation of these reactions — which is where competition analysts have the greatest expertise — leads to many more failures than there should be. Perhaps the analysts involved with product launches are thus given more of a mandate than analysts involved in other kinds of activities.

    None of these steps involves spending millions on the intelligence or hiring legions of analysts. And overall, these four variables explained a respectable 40% of the variability in having an impact on decisions. In terms of magnitude of the effect, the simple “sign off” requirement from management was clearly the leading contributor to explaining variability of impact.
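
For readers curious about the mechanics, this is roughly what fitting one of those regressions and reading off the share of explained variability (R-squared) looks like. The numbers below are synthetic stand-ins, not the survey’s data.

```python
# A minimal ordinary-least-squares fit with R^2, the statistic behind a
# "percent of variability explained" figure. One predictor for brevity;
# the survey's models used several.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot   # R^2: share of variance explained

# Hypothetical scores: x = analyst has sign-off authority (0/1),
# y = self-reported decision impact on a 1-10 scale.
x = [0, 0, 0, 1, 1, 1, 0, 1]
y = [3, 4, 5, 7, 8, 6, 4, 9]
slope, intercept, r2 = ols(x, y)
assert slope > 0            # sign-off authority associates with higher impact
assert 0.0 <= r2 <= 1.0
```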

    For these decisions, the ones that were improved by competitive intelligence, CI analysts reported many applications of their insights. While product launches were over-represented, our respondents told us about a wide array of applications for their analyses. They were roughly evenly distributed between pursuing opportunities (46%) and reducing risks (44%), and ran the gamut from product pricing and features to capex investments, manufacturing processes, market expansion, joint ventures, M&A, and more.

    For example, in the pharmaceutical industry, respondents said that use of competitive intelligence had either saved or generated millions through discontinuing ineffective drug development efforts, walking away from bad deals and/or licensing opportunities, or accelerating new drug development based on what competitors were doing. As one told us, “We accelerated our orphan disease program, based on accurate prediction of rival expected entry.”

    A common theme across industries was the smart reallocation of resources. One analyst told us that their company had stopped development on a project that was consuming lots of local resources after the analysis indicated it wouldn’t be effective. They then re-applied those resources to an area with true growth potential — that area is now starting to take off. In a different company, an analysis led to the cancellation of an extremely high-risk R&D program.

    This is not to discount the importance of ratifying a current course of action. In one of our favorite answers to our open-response question, an analyst described how CI had “identified only a single competitor, while determining others did not have the business case to continue a pursuit.” But it’s clear to us from this and other surveys we’ve done that the companies that get the most out of CI use it for a wide array of purposes – and actually let it shape their decisions.

    Source: Harvard Business Review

  • Recognizing the mismatch between your analytics platform and your business

    Recognizing the mismatch between your analytics platform and your business

    It’s no secret that analytics confers a significant competitive advantage on companies that successfully implement BI platforms and drive key decision making with data. Yet, many organizations struggle in this endeavor. So, why aren’t more analytics and BI implementations delivering results? No one believes that you can simply install analytics and BI software and magic will occur. It is understood that a successful implementation requires two other ingredients: people (end users) and processes (collaboration). The magic only happens when you have alignment on all three elements: the right people, the right processes, and the right tools.

    But what if you knew you had the best and brightest on your staff? And what if they were hungry to solve the organization’s most pressing challenges with data? What if the reason the BI implementation was failing was not the users or their willingness to work together, but that they were using the wrong analytics platform? What if the solution chosen as the centerpiece of an analytics strategy was not fit for duty?

    Watch for the signs

    Consider the following scenario: You finally chose the analytics platform that you hoped would propel your organization to success. At first, everything seemed fine. You went through dozens of stakeholder reviews and witnessed countless vendor demos. You spoke to your executive team, IT leaders, and line-of-business managers. You eliminated the platforms that seemed too complicated for the task and the ones that didn’t quite have the horsepower for your enterprise needs. Plus, the CEO loved the attractive visualizations and report templates included out-of-the-box.

    But now you are halfway through the implementation, and you are starting to see the signs that things are not going entirely to plan. You have the feeling that nothing has really changed in the way people go about their work and that the business has not made any significant progress. You look around and begin to feel that the BI application you selected may not have been the best choice. The following are four signs that you may have chosen the wrong platform:

    1. The content tells you answers everyone already knows

    Everybody loves pie charts. And column charts. And scatter plots. Any visualization is fantastic. However, visualizations are simply representations of data, and they often tell you what you already know. For example, say you have a pie chart on a dashboard that shows your top 10 customers by geography. It will wow you at first, but the novelty wears thin when you realize you already knew your top accounts. What you’d like to do is ask the next questions: What’s the year-over-year change in customers? Why am I losing them or keeping them? Can I take my highest performing salespeople and see why they are successful compared to the unsuccessful ones? If your platform gives you attractive charts but only a modicum of analytic depth, you’ll be left hungry for more.

    2. People are not using it

    Imagine that an analyst has a beautiful chart based on data from your accounting system showing product sales over the last three trailing quarters. But the chart doesn’t tell her about profitability in the next 3 months, or the reasons for profitability. It only gives her the obvious answers.

    So, she reviews a separate profit and loss report (usually a grid of figures), cuts and pastes the data into Excel, applies a forecast algorithm, and then plops that into a PowerPoint to share with the VP of sales. Worse yet, she extracts it out of the accounting tool as raw data because the data in the BI platform was both stale and slightly incorrect. In short, she uses anything other than your company’s expensive analytics tool to produce the real insights. If your employees are not using the platform to make decisions, it risks becoming shelfware.

    A provider of a well-known BI platform likes to promote its high number of 'active touches'. What’s alarming is that the vendor considers an active touch to be once-a-month use. So here are a few questions: Is a person actively communicating if they’re only checking their email once a month? Are you considered worldly if you only check the news once a month? Similarly, are your employees 'data-driven' if they’re only checking their analytics once a month? A successful implementation requires active use of data, and people should have a natural need to use it.

    3. Your tool is too simplistic to answer complex business questions; or, it’s too complicated for people to actually use

    You purchased the platform to accelerate speed-to-insight, not slow it down. However, if you find that your platform merely generates visualizations that don’t trigger meaningful action, then your analytics tool lacks sophistication. Data visualizations cannot make decisions for you; they simply provide representations of data. If a visualization is inherently unsophisticated, or simply restates the obvious, it’s just a pretty picture. And if the analytics tool doesn’t let you interrogate the data beyond static (or lightly interactive) visualizations, or you need expert help to answer the question, that’s a problem. Difficult business questions require sophisticated tools.

    Many analytics platforms are rudimentary by design in an attempt to cater to the lowest common denominator (the casual user who only lightly consumes information). Yet they alienate the users who want more than just attractive visualizations. Other platforms cater to the 5% of users who demand high-powered analytics, the data scientists among the userbase. However, this alienates the majority of users because the tool is too difficult or time-consuming to learn. Analytics is a continually evolving exercise. You need to be constantly thinking about the next question, and the next question after that. And the next question cannot come at a tremendous cost; it cannot be a development project that constrains decisions.

    For an analytics implementation to truly work, it needs to cater to the 80% in the middle group of users. The ideal platform finds that middle ground. It provides you with a friendly UI that the average user can appreciate, but plumbs in sophisticated analytics, with simplicity, so advanced users can explore greater depths of sophistication and answer the tough business questions. The art is activating the 80%, those that need more than nothing, but less than everything.

    4. The confidence in your insights and analysis is low

    Now, more than ever, users need data to inform their decisions, and they need to be able to trust the data. Desktop-based tools allow users to build their own content entirely untethered from the organization, regardless of whether the underlying data or analytics is accurate or not. This causes downstream problems and sows distrust in the integrity of the data. No one can act on information without confidence in the people, processes, and tools. Analytic platforms should provide governance capabilities to manage data from a centrally administered repository so that analysis can be reproducible and defensible. It should provide the means to trace the origins of the data, the techniques used to examine it, and the individuals who prepared the analysis.

    The dangers of picking the wrong analytics platform

    Often, data visualization platforms are purchased when 'analytics' is merely a check box. The platforms may provide the ability to build and show data representations, but they seldom go deep enough. A serious analytics platform lets you and your business users ask the next big question, and the next one after that. And those questions are never simple; if the answer is obvious, the question usually doesn’t need to be asked.

    If you made a purchasing decision with analytics as an afterthought, you will see the signs with time. It could mean that your efforts won’t deliver meaningful value or, worse yet, that your efforts will utterly fail. So, if you are serious about your analytics, then get a serious analytics platform.

    Author: Avi Perez

    Source: Pyramid Analytics

    Rubrik is a data resilience leader in latest Forrester report


    Rubrik is a data resilience leader in latest Forrester report

    In the latest edition of the Forrester Wave report on Data Resilience Solutions, Rubrik has been named a leader. The multi-cloud data control provider even received the highest score in the strategy category.

    Forrester evaluated ten vendors against forty criteria, divided into three categories: current offering, strategy, and market presence. Rubrik achieved the highest possible score for strategy and security.

    'Rubrik is a fit for companies looking to simplify, modernize, and consolidate their data resilience', according to the report. Rubrik is described as a ‘simple, intuitive, and powerful policy engine that manages the protection of data regardless of the type, location, or purpose of the data'.

    According to Rubrik CEO Bipul Sinha, the recognition shows that Rubrik is well positioned to lead the transformation of the data management market. 'Customers are placing ever higher demands on data management solutions, going beyond just backup and recovery. Receiving the highest score for strategy confirms that we are on the right track to keep meeting our customers’ needs through innovation'.

    Source: BI Platform

    Increase in privacy complaints about personal data in the first half of 2019

    Increase in privacy complaints about personal data in the first half of 2019

    The number of privacy complaints keeps rising sharply. More than 15,000 people filed a complaint with the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) in the first six months of this year, the regulator reports. That is almost 60% more than in the second half of 2018.

    According to the AP, people get stuck when requesting access to their data or asking for it to be deleted. This occurs mainly with commercial service providers, such as energy suppliers and retailers.

    In the first six months of 2019, 15,313 people filed a privacy complaint with the AP. The number of international complaints has also risen sharply. One explanation for the large number of complaints is that the option to file privacy complaints is new in the Netherlands and is becoming increasingly well known.

    The AP completed more than 10,000 complaints in the first half of 2019. Many complaints made by phone could be resolved satisfactorily on the spot. In many other cases, AP staff helped people file a complaint themselves with the organization the complaint concerned. There are 68 investigations under way, prompted by a multitude of complaints.

    Commercial service providers (46%), government (14%), and the IT sector (13%) are the sectors the AP receives the most complaints about. For commercial service providers, such as energy suppliers and retailers, complaints are mainly about people’s privacy rights and direct marketing. For government, the lawfulness of data processing comes up most often.

    Chair Aleid Wolfsen believes a structural solution is needed, ‘so that we can keep handling complaints adequately. The protection of your privacy is a fundamental right; it must never become an empty gesture.’

    Since 25 May 2018, anyone can file a privacy complaint with the Autoriteit Persoonsgegevens (AP). This is possible when someone suspects that his or her personal data has been processed in a way that violates privacy law.

    Source: Emerce

  • The 5 dimensions that help your business with a successful technological transformation

    The 5 dimensions that help your business with a successful technological transformation

    Businesses that have mastered the ability to change quickly share one common denominator: technology is transforming their business. Technology can be a transformative engine that gives your organization the power to learn, adapt and respond at the pace of change.

    Today’s IT leaders have many tools to enable speed and flexibility, including Lean IT, Agile, DevOps and Cloud First among others. However, these concepts alone rarely deliver the technology transformation that organizations need because companies are tempted to think of transformation as a predominately organizational journey. Organizations need to think much more holistically in order to lead a technology transformation and enable a flexible and efficient business.

    There are five essential components, the 5 dimensions, that can lead to a successful technology transformation. Each dimension allows you to learn something unique about your organization, somewhat similar to an archeologist digging through an archeological tell. The 5 dimensions can be used to drive a holistic technology transformation that fits your historical and cultural context.

    Here's a brief look at the 5 dimensions and how they can serve you:

    1. Business alignment 

    Far too many organizations build their technology strategies by aligning with the tactics of their business operations. The result is strategic dissonance, as IT resources are not correctly prioritized to meet strategic business priorities. This misalignment leads to new architectural debt. Today's tech leaders need to understand the organization's business model and build a technology strategy that unlocks and empowers that model, ensuring alignment along the way.

    2. Architectural debt 

    Most organizations suffer from technical debt: systems built for expediency instead of best practices. Architectural debt, on the other hand, is the systemic root cause in the creation of technical debt. A recent survey by IDG and Insight Enterprises found that 64% of executives cited legacy infrastructure and processes as a barrier to IT and digital transformation. ‘Legacy infrastructure and processes’ is just another way of describing architectural debt. Debt is an important concept for technology organizations because it constrains flexibility and results in an IT organization managed by the inertia of their systems. If you want to lead an IT or digital transformation, you must quantify your architectural debt and pay down (minimize) or pay off (eliminate) that debt in order for your transformation to be both successful and sustainable.

    3. Operational maturity 

    IT organizations exist on a spectrum of maturity, classified into three distinct phases: operators, automators, and innovators. Operational maturity is a critical enabler of an organization’s ability to execute their vision or goals. There is a high correlation between business value and operational maturity. Mature IT organizations are focused on high quality, business value-added activities. An IT organization’s capabilities directly correlate with its phase of maturity along our spectrum. You must look at the people, processes, technologies and artifacts to understand where change must occur in order to increase operational maturity.

    4. Data maturity

    Clive Humby, U.K. mathematician and architect of Tesco’s Clubcard, famously said in 2006 that 'Data is the new oil… It’s valuable, but if unrefined it cannot really be used'. Just over a decade later, The Economist called data the world’s most valuable resource. Many organizations are sitting on mountains of unrefined data, uncertain how they should be storing, processing, or utilizing that valuable resource. Top-performing organizations that use data to drive their business and technology decisions have a distinct competitive advantage, today and tomorrow.

    5. Organizational dexterity 

    Your organization’s capacity for innovation and change directly correlates with its dexterity. To quote Peter Drucker: 'In times of turbulence, the biggest danger is to act with yesterday’s logic'. Organizations falter when they have institutionalized a culture of yesterday’s logic. An agile organization isn’t just a decentralized organization; it’s an organization that can learn and unlearn, and that demonstrates complex problem solving, emotional intelligence, and much more.

    We live and work in turbulent times, with more volatility on the horizon. Is your technology ready? How about your organization? The 5 dimensions play a critical role in building a holistic understanding of your organization. Seeing the whole picture enables you to build a pragmatic path forward that leads to a true technology transformation.

    Author: Alex Shegda

    Source: Information-management

  • The data management issue in the development of the self-driving car

    The data management issue in the development of the self-driving car

    Self-driving cars and trucks once seemed like a staple of science fiction which could never morph into a reality here in the real world. Nevertheless, the past few years have given rise to a number of impressive innovations in the field of autonomous vehicles that have turned self-driving cars from a funny idea into a marketing gimmick and finally into a full-fledged reality of the modern roadway. However, a number of pressing issues are still holding these autonomous vehicles back from full-scale production and widespread societal embrace. Chief amongst them is the data management challenge wrought by self-driving vehicles.

    How should companies approach the dizzying data maze of autonomous vehicles? Here’s how to solve the data management of self-driving cars, and what leading automotive companies are already doing.

    Uber and Lyft want to release self-driving cars on the public

    Perhaps the most notable development in the creation of autonomous vehicles over the past few years has been that Uber and Lyft have both recently announced that they’re interested in releasing self-driving cars to the general public. In other words, these companies want autonomous vehicles that are navigating complex city environments by themselves and without the assistance of a human driver who can take over in the event of an emergency.

    Uber has already spent a whopping $1 billion on driverless cars, perhaps because the ridesharing app relies heavily on a workforce of freelancers who aren’t technically considered full-time employees. It could be that Uber and other companies see a financial imperative in automating their future workforce so that they don’t have to fret about providing insurance and other benefits to a large coterie of human employees. Whatever the company’s motivations, Uber has clearly established itself as a leader in the self-driving car space where investments are concerned and will continue to be a major player for the foreseeable future.

    Other companies like Ford may have the right idea; they’re moving in the opposite direction of Uber and taking things slowly when debuting their autonomous vehicles. This is because Ford believes that solving the data management challenge of self-driving cars takes time and caution more than heavy spending and ceaseless innovation. Ford’s approach, the opposite of Uber’s, could pay off too, as the company has avoided the disastrous headlines that have followed Uber everywhere when it comes to testing and general brand PR.

    We can learn from Ford in one regard: haste, though important when delivering a product to market, often results in shoddy production and costly mistakes. The company is taking things slow when it comes to collecting and managing data from auto insurance companies, a standard others should follow if they don’t want to get in over their heads. Ford’s focus on creating data 'black boxes', not dissimilar to those on airplanes, which can be consulted in the event of a major crash or incident for a data log of what occurred, is going to become a standard feature of autonomous vehicles before long.
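
One plausible way to make such a black box trustworthy is a hash-chained, append-only log, in which every record commits to the hash of its predecessor so that any retroactive edit is detectable. The record fields and events below are hypothetical, not any actual Ford design.

```python
import hashlib
import json

# A sketch of a tamper-evident "black box" event log: each record is chained
# to the hash of the previous one, so any after-the-fact edit breaks the chain.

class BlackBox:
    def __init__(self):
        self.records = []
        self.last_hash = "0" * 64          # genesis value

    def log(self, event: dict):
        record = {"event": event, "prev": self.last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        self.last_hash = hashlib.sha256(payload).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        prev = "0" * 64
        for record in self.records:
            if record["prev"] != prev:     # link to predecessor must match
                return False
            payload = json.dumps(record, sort_keys=True).encode()
            prev = hashlib.sha256(payload).hexdigest()
        return prev == self.last_hash

box = BlackBox()
box.log({"t": 0.0, "sensor": "lidar", "obstacle": True})
box.log({"t": 0.1, "action": "brake", "decel_mps2": 4.5})
assert box.verify()

box.records[0]["event"]["obstacle"] = False   # tamper with history...
assert not box.verify()                       # ...and the chain no longer checks out
```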

    It’s a matter of trust

    It’s going to become increasingly obvious over the next few years that solving the data management challenges wrought by the advent of self-driving cars is going to be a matter of trust. Drivers need to be certain that their cars aren’t acting as surveillance devices, as does society broadly speaking, and manufacturers need to be taking steps to build and strengthen trust between those who make the car, those whose data the car collects, and those who analyze and utilize such data for commercial gains.

    The fierce competition between Tesla and Waymo is worth watching in this regard, largely because the profit incentives of the capitalist marketplace will almost assuredly lead both of these companies to throw caution to the wind in their race to beat one another via self-driving cars. We will only be able to solve the data management challenge issued by autonomous vehicles if we learn that sometimes competition needs to be put aside in the name of cooperation that can solve public health crises like deaths resulting from self-driving vehicles.

    The data management challenge posed by self-driving cars demands that the auto and insurance industries also take ethics into consideration to a hitherto undreamt-of extent. Modern vehicles are becoming surveillance hubs in and of themselves, with Tesla’s newest non-lidar approach to self-driving car data collection proving to be more accurate, and thus necessarily more invasive, than nearly any other technique that’s yet been pioneered. While this may help Tesla in the sense that it’s propelling the company ahead of its adversaries technologically speaking, it poses immense ethical questions, such as what the responsibility of the market leader is when it comes to fostering innovations which necessarily surveil the public in order to function.

    It’s a self-driving world now

    The data management challenges being generated by the ceaseless advance of self-driving vehicles won’t go away anytime soon, as we’re now in a self-driving world where automation, data collection (another term for surveillance), and programmatic decision-making are the new standard. While we’ve grown used to always being the one doing the driving, humans are now being put in the backseat and must trust in the capacity of machines to deliver us to a brighter future. In order to arrive at our destination unimpeded, we need a new focus on ethics across the automotive and insurance industries to ensure this new technology is primarily used for good.

    Additional regulation will be needed to protect the privacy of everyday people, and modern infrastructure must be built to ease the sensory burden placed on autonomous vehicles if they are to succeed in the long term. The good news for those who love self-driving cars is that the profit incentive is enough to make companies plow ahead regardless of the data management challenges they face. That could create huge ethical dilemmas later on, though: if we want our values represented on the roads of tomorrow, we cannot let humans become entirely unmoored from the driver’s seat.

    Author: Steve Jones

    Source: SmartDataCollective

  • The persuasive power of data and the importance of data integrity

    The persuasive power of data and the importance of data integrity

    Data is like statistics: a matter of interpretation. The process may look scientific, but that does not mean the result is credible or reliable.

    • How can we trust what a person says if we deny the legitimacy of what he believes?
    • How can we know a theory is right if its rationale is wrong?
    • How can we prove an assertion is sound if its basis is not only unsound but unjust?

    To ask questions like these is to remember that data is neutral; it is an abstraction whose application is more vulnerable to nefarious ends than noble deeds. Human nature is replete with examples of discrimination, tribalism, bias, and groupthink; it is not unnatural for confirmation bias to prevail at the expense of logic; all humanity is subject to instances of pride, envy, fear, and illogic.

    What we should fear is not data, but ourselves. We should fear the misuse of data to damn a person or ruin a group of people. We should fear our failure to heed Richard Feynman’s first principle about not fooling ourselves. We should fear, in short, the corruption of data; the contemptible abuse of data by all manner of people, who give pseudoscience the veneer of respectability.

    Nowhere is the possibility of abuse more destructive, nowhere is the potential for abuse more deadly, nowhere is the possible, deliberate misreading of data more probable than in our judicial system.

    I write these words from experience, as both a scientist by training and an expert witness by way of my testimony in civil trials.

    What I know is this: Data has the power to persuade.

    People who use data, namely lawyers, have the power to persuade; they have the power to enter data into the record, arguing that what is on the record, that what a stenographer records in a transcript, that what jurors read from the record is dispositive.

    According to Wayne R. Cohen, a professor at The George Washington University School of Law and a Washington, DC injury claims attorney, data depends on context.

    Which is to say data is the product of the way people gather, interpret, and apply it.

    Unless a witness volunteers information, or divulges it during cross-examination, a jury may not know what that witness’s data excludes: exculpatory evidence, acts of omission that reveal the accused is not guilty, that the case against the accused lacks sufficient proof, that the case sows doubt instead of stamping it out.

    That scenario should compel us to be more scrupulous about data.

    That scenario should compel us to check (and double-check) data, not because we should refuse to accept data, but because we must not accept what we refuse to check.

    That scenario summons us to learn more about data, so we may not have to risk everything, so we may not have to jeopardize our judgment, by speculating about what may be in lieu of what is.

    That scenario is why we must be vigilant about the integrity of data, making it unimpeachable and unassailable.

    May that scenario influence our actions.

    Author: Michael Shaw

    Source: Dataversity

  • Different perspectives on the transition to the cloud

    Different perspectives on the transition to the cloud

    CIO recently hosted a meeting with Juniper Networks in the old control tower at Schiphol-Oost. Together with seven invited guests they discussed, quite fittingly in that setting, the journey to the cloud. Opinions on the matter turned out to differ considerably.

    We conducted the discussion around three propositions. The first concerns pure network connectivity, the second security, and the third automation. That way we start with the basics, connectivity, then look at how it should be secured, and finally at how connectivity and security can be automated.

    Proposition 1: A network is a network, regardless of the deployment model

    We discuss this first proposition with Naomi Du Burck, manager front office IT Operations at ANWB in The Hague, and Peter Verdiesen, head of ICT at Countus accountants en adviseurs in Zwolle. These are two people from sectors where the words ‘public cloud’ mainly bring challenges to mind. For both organizations, then, it very much matters what the network looks like.

    For both participants, the GDPR (AVG) is without doubt the most obvious red flag when it comes to the cloud. Both ANWB and Countus handle large amounts of data about members and clients. ‘That makes cloud in general fairly complicated,’ both agree. They cannot simply put their data in the cloud. ANWB also has a lot of legacy. ‘That effectively rules out the public option in the current situation,’ says Du Burck.

    For both ANWB and Countus, the network and the infrastructure as a whole are subordinate to what they are allowed to do with users’ data. The same applies to the telemetry and analytics you can extract from the network through continuous monitoring. Verdiesen: ‘We would very much like to use this company-wide, but we are not allowed to without additional GDPR measures that make it far more complex.’ He does see that very valuable insights could be extracted from it.

    That is not to say analytics is forbidden by definition: ‘One on one it is allowed, between a single client and Countus, but scaling up is not allowed without the consent of everyone involved,’ says Verdiesen. Until the legislation is amended, they are stuck with this, he concludes.

    Even though Du Burck dares to state that ‘ANWB will never go fully public,’ a hybrid model is certainly being considered. Think of providing workplaces for employees: Office 365, but also remote access to the ANWB environment, is something the organization is working on. After all, it has to keep up with developments around the modern workplace.

    Finally, Du Burck points out that a move to a public cloud requires many more things to be investigated, agreed upon, and changed before it can be realized. Think of changes in governance, operations, policy, budgets, and so on.

    Proposition 2: Private means more control over privacy and compliance

    We discuss the second proposition with Erik van der Saag, ICT sector manager at the Tabijn school group, and Duncan Megens, who, like Du Burck, works at ANWB, but as manager backend - IT operations. Here we have two radically different organizations at the table. According to Van der Saag, Tabijn is already 80% in the cloud, while ANWB is not even at 10%.

    We quickly end up in an almost philosophical discussion about what we mean by control. Megens: ‘What do you mean by control? I can interpret that in several ways. Is it about the theoretical control you have over your network, or about how well you actually have it under control? That is quite a difference, for the business as a whole too.’

    Take Office 365 as an example. Tabijn is in the process of making the switch, and at ANWB preparations for the transition are also underway. ‘You certainly do give up control when you move to Office 365,’ according to Van der Saag, something Megens wholeheartedly agrees with. Yet given the cloud character of the service you actually regain control when it comes to patching and the like, depending on how you set things up.

    The bottom line, according to both, is that there is little point in talking about control without talking about how something is set up. Van der Saag gives the example of exporting pupil data to the cloud. Tabijn certainly does this, but under conditions that must be observed: ‘Pupil data may no longer simply be exported. A layer has to be added in between to ensure the data is also secure.’ In the end, Tabijn has just as much control over privacy and compliance in the cloud as it would have on-premises.

    The conclusion of this discussion is that the proposition is not true by definition. If you set up the network and infrastructure properly, it does not matter where your data resides or where your applications run. Ultimately it is also a matter of trust. There is often still a feeling that data is less safe outside the walls of your own environment, but that need not be the case. This undoubtedly also has to do with a generation gap, so over time the gut-feeling argument will be heard less often.

    Proposition 3: The cloud takes the worries off your hands

    For the third and final proposition we sit down with Daniel Treep, architect at KPN, and Martijn Jonker of Andarr Technology. KPN needs little introduction; Andarr is a company that in its own words is ‘not for the faint of heart’ and offers ICT consultancy and secondment services to organizations.

    Around the table we quickly agree that ‘taking the worries off your hands’ may not be the best-chosen phrase. For customers making the transition from a private environment, the prevailing feeling is that complexity is actually being added, certainly where the network is concerned. You suddenly have to fit all sorts of different platforms into one network architecture. You can hardly call that being unburdened; rather the opposite. You could also see this as a transition phase you simply have to get through. At least, that is what we sense in Jonker’s remark that ‘offering everything in the cloud is the ultimate dream in terms of maturity.’

    According to Treep, it matters a great deal what you consume in the cloud. With SaaS you simply purchase a complete service which, if all is well, takes the worries off your hands. Jonker immediately adds a caveat, because very little has been laid down about how services must be offered in the cloud. ‘If all is well, a service is set up so that you cannot access it without a password, for example, but there is no obligation whatsoever to do so.’ So your worries cannot go completely out the window when purchasing SaaS, according to Jonker.

    In contrast to SaaS, Treep says, with PaaS and IaaS you have virtually the same worries as with other deployment models. Jonker agrees: ‘a programmer can do whatever he wants with it and create an enormous data lake you no longer have any overview of.’

    For Treep the bottom line of handing things over in the cloud is simple. ‘How much control do you get over the platform? That is ultimately what it comes down to.’ If you have a lot of control, you can set things up in such a way that you have few worries about them. Automation plays a clear role here: ‘Automation is the foundation of any cloud approach.’

    Automation and unburdening are thus closely related, so in that sense you could say the proposition holds up conceptually, even though both gentlemen add the necessary caveats.

    If you interpret ‘unburdening’ as meaning that all your worries as an IT manager are over, you will be sorely disappointed, both gentlemen think. ‘You trade your current worries for different worries in the cloud,’ is the clear conclusion.

    Conclusion: Different speeds and different end goals

    Discussions like the one described above, between managers of widely varying organizations, always yield a nice cross-section of the market. When it comes to the journey to the cloud, it is clear that not every organization moves at the same speed, but also that not every organization has, or should have, the same end goal.

    If you are an organization that thinks in terms of infrastructure, and thus the network, when thinking about change, you are much more inclined to respond positively to the propositions. Put bluntly, you then assume that by properly configuring the various interfaces it should be perfectly possible to create one large logical network in which you have control and can automate everything.

    If, as an organization, you are mainly occupied with personal data and your applications matter far more than your infrastructure and network, you will be less positive. That is only logical, because then it is not a technical exercise. The GDPR, for example, is hardly about technology at all.

    This type of organization can certainly make the move to the cloud as well, but it requires far more additional measures. At a school group like Tabijn, for example, that is somewhat more manageable than at ANWB, to name just one example.

    That said, in our view you can question whether every organization should want to pursue the ‘ultimate dream of maturity’ that Martijn Jonker of Andarr spoke of. In some cases it will remain a dream, or will always be seen as a nightmare. Let us also hope that, for some, their performance in the flight simulator is not a harbinger of how their transition to the cloud will turn out.

    Author: Sander Almekinders

    Source: CIO

  • What adjustments does a future with 5G internet demand?

    What adjustments does a future with 5G internet demand?

    Fast fifth-generation mobile internet (5G) will be a reality within a few years. It promises faster download and upload speeds, more capacity, and more stable connections. Although many see the benefits, we should not underestimate the societal transformation that comes with it.

    Beyond the impact 5G will have on businesses, society as a whole will have to adapt. For 5G and the applications we envision for it, it is not even the speed that is crucial, but the reliability and consistency of the connection. That demands quite a bit of infrastructure. With more than 75 billion devices connected to the internet worldwide by 2025, the amount of data will grow enormously, and with it the required capacity. The implications of the new network are greater than they appear at first glance. What else does 5G bring?

    1. Organic interaction

    We use our smartphones to exercise, read, listen to music, and so on. We even use them to improve our health. What is not healthy, however, is that we routinely stare at a screen for hours a day. Apps for setting screen-time limits are selling well; the love of infinite scrolling is starting to cool. Here, 5G is a welcome innovation. Thanks to lower latency, and thus better responsiveness, our thumbs get a rest: we will operate our phones more with voice and gestures.

    With a completely seamless and invisible network connecting all devices wirelessly, data is transferred and stored at high speed. Technology will continue to support us in practically everything we do, but in a more natural form. The smartphone is certainly here to stay, though probably in a more organic, invisible way.

    2. More data, more data centers

    When we switch to 5G en masse, that places certain demands on the network infrastructure. A 5G connection can move data almost a thousand times faster than today’s fiber-optic network. And with the number of connected devices expected to keep growing every year, considerably more data will be in circulation within a short time. Transmitting this data reliably at higher speeds requires substantially more capacity.

    Compare processing this data to a water supply. If you need fifty liters of water, you first consider how far away the source is. Then you ask: how thick is the hose, and how much constant pressure is applied to pump the water out? You can look at data the same way. Does the current network still meet the required data volumes and the pressure needed to deliver them?
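    The hose-and-pressure analogy can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative (the payload size and link speeds are invented numbers, not figures from the article); it only shows that bandwidth, like the width of the hose, dictates delivery time:

```python
def transfer_time_s(payload_gb: float, bandwidth_gbps: float) -> float:
    """Seconds needed to move a payload (in gigabytes) over a link
    with the given bandwidth (in gigabits per second)."""
    return payload_gb * 8 / bandwidth_gbps  # bytes -> bits

# The same 50 GB payload through a thin and a thick "hose":
slow = transfer_time_s(50, 1.0)   # 1 Gbit/s link -> 400 seconds
fast = transfer_time_s(50, 10.0)  # 10 Gbit/s link -> 40 seconds
```

    Latency and jitter are deliberately not modeled here; as the article argues, for 5G use cases those matter at least as much as raw throughput.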

    Current and future network infrastructure must evolve to give 5G the support it needs: reliable, consistent, and fast. Part of the answer lies in edge computing: micro data centers can take some of the pressure off. In effect, you move part of the data processing toward the end station, which reduces noise and latency. Although edge computing is not suitable for every purpose, it can certainly help distribute the load, especially in a world full of IoT (Internet of Things).

    3. Privatization of the networks

    When 5G and all its possibilities are discussed, the most extreme examples are often cited, such as the self-driving car. But also think of the surgeon performing a remote operation with a robotic arm, or the firefighter battling a blaze with the help of a super-fast, real-time internet connection. The latter is essential for reacting and anticipating at the right speed.

    In such cases, where it can literally be a matter of life and death, preventing jitter (variation in delay) is crucial. If the connection falters for even a second, it may already be too late for the patient, the oncoming driver, or the victim. What is needed here is a closed network, set up specifically for such purposes, far removed from interference from other applications.

    Achieving this may even require new regulation, distinguishing between technology that affects health and safety and technology that exists purely for entertainment. The risks are simply not the same.

    Author: Petran van Hugten

    Source: CIO

  • Why the right data input is key: A Machine Learning example

    Why the right data input is key: A Machine Learning example

    Finding the ‘sweet spot’ of data needs and consumption is critical to a business. Without enough data, the business model underperforms. With too much, you run the risk of compromised security and protection. Measuring the data intake needed, like a balanced diet, is key to optimum performance and output. A healthy diet of data will set a company on the road to maximum results without drifting into the red zones on either side.

    Machine learning is not black magic. A simple definition is the application of learning algorithms to data to uncover useful aspects of the input. There are clearly two parts to this process, though: the algorithms themselves and the data being processed and fed in.

    The algorithms are vital, and continually tuning and improving them makes a significant difference to the success of the solutions. But they are just mathematical operations on the data; the pivotal part is the data itself. Quite simply, the algorithms cannot work well on too little data: a deficit leaves the system undernourished and hungering for more. With more data to consume, the system can be trained more fully and the outcomes are stronger.
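    As a minimal, self-contained illustration of that point (a toy example, not from the article), the least-squares fit below recovers a known slope far more accurately from 200 samples than from 5, even with the same noise pattern:

```python
def fit_slope(points):
    """Ordinary least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / sxx

def noisy(x):
    # Deterministic stand-in for measurement noise around the true line y = 2x.
    return 2.0 * x + (0.5 if x % 2 else -0.5)

data = [(x, noisy(x)) for x in range(1, 201)]
small_err = abs(fit_slope(data[:5]) - 2.0)  # "undernourished": 5 samples
large_err = abs(fit_slope(data) - 2.0)      # well fed: 200 samples
```

    With five samples the estimate is off by a few percent; with two hundred the error all but vanishes. It is the same algorithm, simply better fed.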

    Without question, there is a big need for an ample amount of data to offer the system a healthy helping to configure the best outcomes. What is crucial, though, is that the data collected is representative of the tasks you intend to perform.

    Within speech recognition, for example, this means that you might be interested in any or all of the following attributes:

    • formal speech/informal speech
    • prepared speech/unprepared speech
    • trained speakers/untrained speakers
    • presenter/conversational
    • general speech/specific speech
    • accents/dialects

    • noisy/quiet
    • professional recording/amateur recording
    • broadcast/telephony
    • controlled/uncontrolled

    In reality, all of these attributes affect how accurately speech recognition can perform the tasks required of it. The data needed to tick each box therefore differs and varies in how hard it is to obtain. Bear in mind that it is not just the audio that is needed; accurate transcripts are required for training. That probably means most data will need to be listened to by humans for transcription or validation, and that can create a security issue.
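    In practice, matching data to the task often starts with filtering corpus metadata. The records and field names below are hypothetical, invented purely to illustrate selecting recordings that fit a target use case:

```python
# Hypothetical metadata for a small speech corpus (invented for illustration).
corpus = [
    {"id": "a1", "style": "formal",   "channel": "broadcast", "noise": "quiet"},
    {"id": "a2", "style": "informal", "channel": "telephony", "noise": "noisy"},
    {"id": "a3", "style": "informal", "channel": "telephony", "noise": "quiet"},
    {"id": "a4", "style": "formal",   "channel": "telephony", "noise": "noisy"},
]

def select(records, **required):
    """Keep only recordings whose metadata matches every required attribute."""
    return [r for r in records
            if all(r.get(k) == v for k, v in required.items())]

# Training data for an informal phone-call transcription task:
phone_calls = select(corpus, channel="telephony", style="informal")
```

    The same filter can be inverted to spot gaps: attributes for which the corpus has no matching recordings are exactly the boxes that remain unticked.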

    An automatic speech recognition (ASR) system operates in two modes: training and operating.


    Training is most likely managed by the AI/ML company providing the service, which means the company needs access to large amounts of relevant data. In some cases this is readily available in the public domain, for example content that has already been broadcast on television or radio and therefore carries no associated privacy issues. But this sort of content cannot help with many of the other scenarios in which ASR technology can be used, such as phone call transcription, which has very different transmission characteristics. Obtaining that sort of data can be tied up with contracts covering data ownership, privacy, and usage restrictions.


    In operational use, there is no need to collect audio; you simply use the models that have already been trained. The obvious temptation, though, is to capture the operational data and put it to use. As mentioned, this is where the challenge begins: ownership of the data. Many cloud solution providers want broad rights to use the data, since it enables continuous improvement for the required use cases. Data ownership becomes the linchpin.

    The challenge is to be able to build great models that work really well in any scenario without capturing privately-owned data. A balance between quality and security must be struck. This trade-off happens in many computer systems but somehow data involving people’s voices often, understandably, generates a great deal of concern.

    Finding a solution

    To ultimately satiate an ASR system, just enough data must be provided to execute the training so that good systems can be built. Companies also have the option to train their own models, which lets them retain ownership of the data. This often requires a complex professional services agreement and a real investment of time up front, but it can then provide a solution at a reasonable cost.

    ML algorithms are in a constant state of evolution, and techniques now exist that allow smaller data sets to bias systems already trained on big data. In some cases, smaller amounts of data can achieve ‘good enough’ accuracy. This does not remove the data acquisition issue altogether, but sometimes less data can still yield a solution.
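    One common form of such biasing is interpolating a model trained on big data with statistics from a small in-domain sample. The sketch below mixes two unigram word distributions; the vocabulary, counts, and mixing weight are illustrative assumptions, not details from the article:

```python
from collections import Counter

def adapt(general, domain_counts, lam=0.3):
    """Interpolate a general model (word -> probability) with a small
    in-domain sample (word -> count), weighted by lam."""
    total = sum(domain_counts.values())
    vocab = set(general) | set(domain_counts)
    return {w: (1 - lam) * general.get(w, 0.0)
               + lam * domain_counts.get(w, 0) / total
            for w in vocab}

general = {"hello": 0.5, "goodbye": 0.5}           # trained on big data
domain = Counter(["invoice", "invoice", "hello"])  # tiny in-domain sample
adapted = adapt(general, domain)
```

    The adapted model now assigns probability to ‘invoice’, a word the general model had never seen, while the distribution still sums to one.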

    A balanced data diet, achieved through better algorithm tuning and through filtering and selecting data, can deliver the best results without collecting everything that has ever been said. More effort may be needed to reach the right equilibrium. And, without doubt, the industry must keep searching for ways to make the technology work better without compromising people’s privacy.

    Author: Ian Firth

    Source: Insidebigdata
