2 items tagged "Ethical intelligence"

  • Ethical Intelligence: can businesses take the responsibility?

    Adding property rights to inherent human data could provide a significant opportunity and differentiator for companies seeking to get ahead of the data ethics crisis and adopt good business ethics around consumer data.

    The ability of a business to operate on some amount of intelligence is not new. Even before business owners used manual techniques such as writing customer orders in a book, or calculators to help forecast how many pounds of potatoes to stock for next week's sales, there were forms of "insight searching." Enterprises are always looking for operational efficiencies, and today they are gathering exponentially more intelligence.

    A significant part of business intelligence is understanding customers. The more data a company has about its current or prospective customers' wants, likes, dislikes, behaviors, activities, and lifestyle, the more intelligence that business can generate. In principle, more data suggests the possibility of more intelligence.

    The question is: are most businesses and their employees prepared to be highly intelligent? If a company were to reach a state where it has significant intelligence about its customers, could it resist the urge to manipulate them?

    Suppose a social media site uses data about past activities to conclude that a 14-year-old boy is attracted to other teenage boys. Before he discovers where he might be on the gay/straight spectrum, could the social media site's executives, employees, and/or algorithms resist the urge to target him with content tagged for members of the LGBTQ community? If they knowingly or unknowingly target him with LGBTQ-relevant content before the child discovers who he might be, is that behavior ethical?

    Looking for best practices

    Are businesses prepared to handle significant intelligence responsibly, and are there best practices that would give a highly intelligent business an ethical compass?

    The answer is maybe, leaning toward no.

    Business ethics is not new either. Much like business intelligence, it has evolved over time. What is new is that ethics no longer needs to be embedded only in the humans who make business decisions; it must also be embedded in the automated systems that make business decisions. The former, although imperfect, is conceivable: you might be able to hire ethical people or build a culture of ethics. The latter is more difficult. Building ethics into systems is neither art nor science; it is a confluence of raw materials, many of which we humans still don't fully understand.

    Business ethics has two components, and so can be measured along two dimensions. One is the aforementioned ethics in systems (sometimes called AI ethics), which is primarily focused on the design of algorithms. The other is data ethics, which is focused on the raw material that goes into those algorithms: the data itself.

    AI ethics is complex, but it is at least being studied. At the core of the complexity are human programmers, who carry their own biases and bring varying ethical frameworks and customs, and who may therefore create biased or unethical algorithms.

    Data ethics is less complex but not as widely studied. It covers areas such as consent for the possession of data, authorization for the use of data, the terms under which an enterprise is permitted to possess and use data, whether the value created from data should be shared with the data's source (such as a human), and how permission is secured to share insights derived from that data.

    Another area of data ethics is whether the data set as a whole is representative of society. For example, is an algorithm that determines how to spot good resumes being trained on a set that is 80 percent resumes from men and just 20 percent from women?
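
    A check like this can be automated before training begins. Below is a minimal sketch in plain Python (the "gender" field and the 50/50 reference shares are assumptions for illustration) that flags a training set whose group proportions drift far from a reference population:

      from collections import Counter

      def representation_report(records, field, reference, tolerance=0.10):
          """Compare group shares in a training set against reference shares.

          records   -- list of dicts, e.g. [{"gender": "F", ...}, ...]
          field     -- the attribute to audit, e.g. "gender"
          reference -- expected shares, e.g. {"F": 0.5, "M": 0.5}
          tolerance -- maximum allowed absolute gap before flagging
          """
          counts = Counter(r[field] for r in records)
          total = sum(counts.values())
          for group, expected in reference.items():
              actual = counts.get(group, 0) / total
              flag = "FLAG" if abs(actual - expected) > tolerance else "ok"
              print(f"{group}: {actual:.0%} of data vs {expected:.0%} expected [{flag}]")

      # The 80/20 resume example from the text: half the population, a fifth of the data.
      resumes = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
      representation_report(resumes, "gender", {"M": 0.5, "F": 0.5})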

    These are large social, economic, and historical constructs to sort out. As companies become exponentially more intelligent, the need for business ethics will grow in step. As a starting point, corporations and executives should examine the consent for and authorization of the data used in business intelligence. Was the data collected with proper consent? That is, does the user actually know their data is being monetized, or was that buried in a long terms-and-conditions agreement? What were those terms? Was the data donated, was it leased, or was it "sort of lifted" from the user?

    Many questions, limited answers.

    The property rights model

    Silicon Valley is currently burning in a data ethics crisis. At its core is a growing social divide over data ownership between consumers, communities, corporations, and countries. We tend to assume that new problems need new solutions. In reality, sometimes the best solution is to take something we already know and understand and retrofit it to the new problem.

    One emerging construct borrows a familiar legal and commercial framework, that of property, to help consumers and corporations find agreement on the many unanswered questions of data ownership. Treating data under the law of property supplies a set of agreements that can bridge the growing divide between consumers and corporations over data ownership, data use, and consideration for the value derived from data.

    If consumer data is treated as personal property, consumers and enterprises can reach agreement using well-understood and accepted practices: a title of ownership for one's data, tracking and tracing of data as property, leasing of data, protection from theft, taxation of income created from the data, tax write-offs for donating it, and the ability to include data property in one's estate.
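
    To make the framework concrete, here is a hypothetical Python sketch of what a "title" for data property might record. No such registry or schema exists today; every field name here is an assumption for illustration only:

      from dataclasses import dataclass, field
      from datetime import date

      @dataclass
      class DataLease:
          lessee: str          # the enterprise leasing the data
          purpose: str         # the authorized use, e.g. "ad personalization"
          expires: date        # leases end; possession is not perpetual
          royalty_pct: float   # share of the value returned to the owner

      @dataclass
      class DataTitle:
          owner: str                                  # the human the data describes
          asset: str                                  # e.g. "browsing history, 2023"
          consented: bool                             # explicit, informed consent on record
          leases: list = field(default_factory=list)  # active DataLease agreements

          def may_use(self, lessee: str, purpose: str, on: date) -> bool:
              """A use is permitted only under an unexpired lease matching the purpose."""
              return self.consented and any(
                  l.lessee == lessee and l.purpose == purpose and on <= l.expires
                  for l in self.leases
              )

      # e.g. title.may_use("AcmeAds", "ad personalization", date.today())

    Under a record like this, questions such as "was the data donated, leased, or lifted?" become queries against an agreement rather than guesswork.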

    For corporations and executives, with increasing business intelligence comes increasing business ethics responsibilities.

    What is your strategy?

    Author: Richie Etwaru

    Source: TDWI

  • How to use AI image recognition responsibly?

    The use of artificial intelligence (AI) for image recognition offers great potential for business transformation and problem-solving. But numerous responsibilities are interwoven with that potential. Predominant among them is the need to understand how the underlying technologies work, and the safety and ethical considerations required to guide their use.

    Are regulations coming for image, face, and voice recognition?

    Today, governance regulations have sprung up worldwide that dictate how an individual's personal information is held and used, and who owns it. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are examples of regulations designed to address the data and security challenges faced by consumers and the businesses that hold their data. If laws now apply to personal data, can regulations governing image, face, and voice recognition (technology that can identify a person's face and voice, the most personal "information" we possess) be far behind? Further regulations are likely coming, but organizations shouldn't wait for them before planning and governing their own use. Businesses need to follow how the technology is being both used and misused, and then proactively apply guidelines that govern how to use it effectively, safely, and ethically.

    The use and misuse of technology

    Many organizations use recognition capabilities in helpful and transformative ways. Medical imaging is a prime example: through machine learning, predictive algorithms have come to recognize tumors more accurately and faster than human doctors can. Autonomous vehicles use image recognition to detect road signs, traffic signals, other traffic, and pedestrians. For industrial manufacturers and utilities, machines have learned to recognize defects in assets like power lines, wind turbines, and offshore oil rigs from drone-captured imagery. This removes humans from what can be dangerous environments, improving safety, enabling preventive maintenance, and increasing the frequency and thoroughness of inspections. In the insurance field, machine learning helps process auto and property damage claims after catastrophic events, which improves accuracy and limits the need for humans to put themselves in potentially unsafe conditions.
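
    In practice, each of these inspection workflows reduces to a classification or detection pass over captured images. Here is a minimal sketch with PyTorch and torchvision; the two-class defect head, the checkpoint file "defect_resnet50.pt", and the frame filename are assumptions for illustration, not a production pipeline:

      import torch
      from PIL import Image
      from torchvision import models, transforms

      # Standard ImageNet-style preprocessing for a ResNet backbone.
      preprocess = transforms.Compose([
          transforms.Resize(256),
          transforms.CenterCrop(224),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      # Hypothetical: a ResNet-50 fine-tuned on "no defect" vs "defect" drone imagery.
      model = models.resnet50()
      model.fc = torch.nn.Linear(model.fc.in_features, 2)
      model.load_state_dict(torch.load("defect_resnet50.pt"))
      model.eval()

      def inspect(image_path: str) -> str:
          img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
          with torch.no_grad():
              probs = torch.softmax(model(img), dim=1)[0]
          return f"defect probability: {probs[1].item():.2f}"

      print(inspect("powerline_frame_0042.jpg"))  # hypothetical drone frame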

    Just as most technologies can be used for good, there are always those who seek to use them for ignoble or even criminal ends. The most obvious example of the misuse of image recognition is deepfake video or audio, which uses AI to create misleading content, or to alter existing content, in an attempt to pass off as genuine something that never occurred. One example is inserting a celebrity's face onto another person's body to create a pornographic video. Another is using a politician's voice to create a fake audio recording that seems to have the politician saying something they never actually said.

    Between intentional beneficial use and intentional harmful use lie gray areas and unintended consequences. If an autonomous vehicle company used only one country's road signs as the data to teach the vehicle what to look for, the results could be disastrous when the technology is deployed in another country where the signs are different. Governments, meanwhile, use cameras to capture on-street activity. Ostensibly the goal is to improve citizen safety, but the result is a database of people and identities. What are the implications for a free society that now seems to be under public surveillance? How does that change expectations of privacy? What happens if that data is hacked?
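
    The road-sign example suggests a simple guardrail: before deployment, measure performance separately for each population the system will actually face rather than reporting one global number. A sketch in plain Python (the country field and the example figures are illustrative assumptions):

      def accuracy_by_group(model_fn, samples):
          """Break accuracy out by group instead of one aggregate score.

          model_fn -- callable mapping an image to a predicted label
          samples  -- iterable of (image, label, country) tuples
          """
          hits, totals = {}, {}
          for image, label, country in samples:
              totals[country] = totals.get(country, 0) + 1
              hits[country] = hits.get(country, 0) + int(model_fn(image) == label)
          return {country: hits[country] / totals[country] for country in totals}

      # A model trained on only one country's signs can look excellent in aggregate
      # and still fail abroad: a result like {"US": 0.97, "DE": 0.41} is a
      # deployment blocker, not a footnote.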

    Why take proactive measures?

    Governments and corporate governance bodies will likely create guidelines and laws that apply to these types of tools. There are a number of reasons why businesses should proactively plan now for how they create and use these tools, before those laws come into effect.

    Physical safety is a prime concern. If an organization creates or uses these tools in an unsafe way, people could be harmed. Setting up safety standards and guidelines protects people and also protects the business from legal action that may result from carelessness.

    Customers demand accountability from companies that use these technologies. They expect their personal data to be protected, and that expectation will extend to their image and voice information as well. Transparency helps create trust, and that trust will be necessary for any business to succeed in the field of image recognition.

    Putting safety and ethics guidelines in place now, including establishing best practices such as model audits and model interpretability, may also give a business a competitive advantage by the time laws governing these tools are passed. Other organizations will be playing catch-up while those who have planned ahead gain market share over their competitors.
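
    Those best practices can start small. One widely used, model-agnostic audit technique is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on synthetic placeholder data; a real audit would substitute the production model and a held-out evaluation set:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      # Placeholder data standing in for the model's real evaluation set.
      X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

      # Large score drops mark the features the model actually leans on,
      # which an auditor can then scrutinize for bias or leakage.
      result = permutation_importance(model, X_test, y_test,
                                      n_repeats=10, random_state=0)
      for i in np.argsort(result.importances_mean)[::-1]:
          print(f"feature {i}: {result.importances_mean[i]:.3f} "
                f"+/- {result.importances_std[i]:.3f}")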

    Author: Bethann Noble

    Source: Cloudera
