2 items tagged "ethics"

  • Ethical insights on the future of Generation AI

    Millions of our youngest people, children under 10 years old, are exposed to misleading and biased uses of artificial intelligence. What will you do about it?

    We are a digital society in transition: children born since 2010 (known as Generation AI or Gen AI) are being shaped by the ubiquitous presence of artificial intelligence (AI) from birth. Already some of the 650 million members of Gen AI, ages 5 through 9, are exposed to AI used in schools and by marketers.

Young children adopt technology easily. While the numbers vary by country and economic stratum, children as young as 2 years old use tablets and play with smartphones. Indeed, a 2015 study conducted at the University of Iowa found that, by age 2, 90% of children have a moderate ability to use a tablet. Toddlers who are just learning to walk know how to swipe, point and get results from a tablet, even though some games are simply too complex for their age.

    A review of studies published in Child Development Perspectives in 2018 found that toddlers may learn more from interactive digital media that employ machine learning than from educational TV and videos. The question for you and me is: 'Who decides what is important for the Gen AI child to know, how is their attention guided to that content and by what algorithms?'

Bob Hetu, Kelly Calhoun Williams and I recently published a piece of 'Maverick' Gartner research presenting our finding that the infusion of AI into the world of our youngest generation, the first 'AI natives', will produce dystopian results. Gen AI children have not yet fully developed critical thinking skills, that is, the ability to independently analyze, assess and reconstruct ideas, leaving them unable to discern whether a marketing message is partisan or discriminatory. Yet businesses and educational technology providers are incorporating AI into products targeted at or used by Gen AI children.

    As a result, Gen AI is exposed to unscrupulous (as well as well-meaning but misguided) uses of AI in their young lives. Young enough to be influenced by AI, they are also too young to do anything about it.

Moreover, children are also exposed to AI-driven content intended for adults. That adult-oriented content will reinforce and extend the impact AI has on them. And while current exposure to AI is largely limited to marketing messages and interactive responses based on machine learning techniques, its role in Gen AI’s lives will only expand over time.

If this exposure is left unchecked, Gen AI children will come to consider it a normal part of being a consumer, student and citizen, leaving them open to malicious manipulation. Society must ensure Gen AI learns to differentiate and evaluate AI-based content. Children must learn and practice critical thinking and evaluative skills early and often.

You and I know children, in our families, neighborhoods or communities, who are members of Generation AI. As members of society, we have a responsibility to moderate the influence AI has on society’s youngest and most vulnerable members. We need to support these children until they have developed the critical thinking skills necessary to discern AI’s influence and a healthy skepticism about what they see and do.

    And as people who are in business, education or other professions, you and I must also advocate for neutral AI results in products, marketing messages and educational tools. The decisions about the use of AI in education, commerce and other social policy areas will determine how Gen AI is raised into adulthood.

    What will be the impact of lifelong exposure to AI on customers and workers? Will our children grow into independent, critical thinkers or dependent, unthinking individuals passively influenced and easily shepherded? An entire generation awaits our response.

    Author: Pete Basiliere

    Source: Gartner

  • Should we care more about ethics in a data science environment?

    The big idea

Undergraduate training for data scientists - a role Harvard Business Review dubbed the sexiest job of the 21st century - falls short in preparing students for the ethical use of data science, our new study found.

    Data science lies at the nexus of statistics and computer science applied to a particular field such as astronomy, linguistics, medicine, psychology or sociology. The idea behind this data crunching is to use big data to address otherwise unsolvable problems, such as how health care providers can create personalized medicine based on a patient’s genes and how businesses can make purchase predictions based on customers’ behavior.

The U.S. Bureau of Labor Statistics projects 15% growth in data science careers from 2019 to 2029, corresponding with an increased demand for data science training. Universities and colleges have responded to the demand by creating new programs or revamping existing ones. The number of undergraduate data science programs in the U.S. jumped from 13 in 2014 to at least 50 as of September 2020.

    As educators and practitioners in data science, we were prompted by the growth in programs to investigate what is covered, and what is not covered, in data science undergraduate education.

    In our study, we compared undergraduate data science curricula with the expectations for undergraduate data science training put forth by the National Academies of Sciences, Engineering and Medicine. Those expectations include training in ethics. We found most programs dedicated considerable coursework to mathematics, statistics and computer science, but little training in ethical considerations such as privacy and systemic bias. Only 50% of the degree programs we investigated required any coursework in ethics.

    Why it matters

As with any powerful tool, the responsible application of data science requires training both in how to use its methods and in understanding their impacts. Our results align with prior work that found little attention is paid to ethics in data science degree programs. This suggests that undergraduate data science degree programs may produce a workforce without the training and judgment to apply data science methods responsibly.

    It isn’t hard to find examples of irresponsible use of data science. For instance, policing models that have a built-in data bias can lead to an elevated police presence in historically over-policed neighborhoods. In another example, algorithms used by the U.S. health care system are biased in a way that causes Black patients to receive less care than white patients with similar needs.
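The policing example describes a feedback loop: patrols are allocated where past incidents were recorded, which produces more recorded incidents in exactly those places. A minimal toy simulation (my own illustration, not from the study; the neighborhood names, counts and rates are invented) shows how a model trained on historically skewed data perpetuates that skew even when the true underlying rates are identical:

```python
import random

random.seed(0)  # reproducible toy run

# Two neighborhoods with the SAME true incident rate, but "A" starts with
# more recorded incidents because it was historically patrolled more heavily.
true_rate = 0.1                    # identical for both neighborhoods
recorded = {"A": 60, "B": 20}      # biased historical record

for year in range(5):
    total = sum(recorded.values())
    # The "model": allocate 100 patrols in proportion to recorded incidents.
    patrols = {n: round(100 * c / total) for n, c in recorded.items()}
    # More patrols means more incidents observed, despite equal true rates.
    for n in recorded:
        recorded[n] += sum(random.random() < true_rate
                           for _ in range(patrols[n]))

share_A = recorded["A"] / sum(recorded.values())
print(f"Share of records attributed to A after 5 years: {share_A:.0%}")
```

Because new observations accumulate in proportion to patrols, and patrols in proportion to the record, neighborhood A's inflated share never corrects itself; the model validates its own bias.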

    We believe explicit training in ethical practices would better prepare a socially responsible data science workforce.

    What still isn’t known

    While data science is a relatively new field – still being defined as a discipline – guidelines exist for training undergraduate students in data science. These guidelines prompt the question: How much training can we expect in an undergraduate degree?

    The National Academies recommend training in 10 areas, including ethical problem solving, communication and data management.

Our work focused on undergraduate data science degrees at schools classified as R1, meaning they engage in high levels of research activity. Further research could examine the amount of training and preparation in various aspects of data science at the master’s and Ph.D. levels and the nature of undergraduate data science training at schools of different research levels.

    Given that many data science programs are new, there is considerable opportunity to compare the training that students receive with the expectations of employers.

    What’s next

    We plan to expand on our findings by investigating the pressures that might be driving curriculum development for degrees in other disciplines that are seeing similar job market growth.

    Source: The Conversation
