11 items tagged "generative AI"

  • A Closer Look at Generative AI

    A Closer Look at Generative AI

    Artificial intelligence is already designing microchips and sending us spam, so what's next? Here's how generative AI really works and what to expect now that it's here.

    Generative AI is an umbrella term for any kind of automated process that uses algorithms to produce, manipulate, or synthesize data, often in the form of images or human-readable text. It's called generative because the AI creates something that didn't previously exist. That's what makes it different from discriminative AI, which draws distinctions between different kinds of input. To say it differently, discriminative AI tries to answer a question like "Is this image a drawing of a rabbit or a lion?" whereas generative AI responds to prompts like "Draw me a picture of a lion and a rabbit sitting next to each other."

    This article introduces you to generative AI and its uses with popular models like ChatGPT and DALL-E. We'll also consider the limitations of the technology, including why "too many fingers" has become a dead giveaway for artificially generated art.

    The emergence of generative AI

    Generative AI has been around for years, arguably since ELIZA, a chatbot that simulates talking to a therapist, was developed at MIT in 1966. But years of work on AI and machine learning have recently come to fruition with the release of new generative AI systems. You've almost certainly heard about ChatGPT, a text-based AI chatbot that produces remarkably human-like prose. DALL-E and Stable Diffusion have also drawn attention for their ability to create vibrant and realistic images based on text prompts. We often refer to these systems and others like them as models because they represent an attempt to simulate or model some aspect of the real world based on a subset (sometimes a very large one) of information about it.

    Output from these systems is so uncanny that it has many people asking philosophical questions about the nature of consciousness—and worrying about the economic impact of generative AI on human jobs. But while all these artificial intelligence creations are undeniably big news, there is arguably less going on beneath the surface than some may assume. We'll get to some of those big-picture questions in a moment. First, let's look at what's going on under the hood of models like ChatGPT and DALL-E.

    How does generative AI work?

    Generative AI uses machine learning to process a huge amount of visual or textual data, much of which is scraped from the internet, and then determine what things are most likely to appear near other things. Much of the programming work of generative AI goes into creating algorithms that can distinguish the "things" of interest to the AI's creators—words and sentences in the case of chatbots like ChatGPT, or visual elements for DALL-E. But fundamentally, generative AI creates its output by assessing an enormous corpus of data on which it’s been trained, then responding to prompts with something that falls within the realm of probability as determined by that corpus.

    Autocomplete—when your cell phone or Gmail suggests what the remainder of the word or sentence you're typing might be—is a low-level form of generative AI. Models like ChatGPT and DALL-E just take the idea to significantly more advanced heights.
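
    To make that "predict what's likely to come next" idea concrete, here's a deliberately tiny Python sketch — a bigram counter over a toy corpus, nothing like a production model — that does autocomplete-style next-word suggestion:

    from collections import Counter, defaultdict

    # Toy corpus standing in for the enormous corpus a real model trains on.
    corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

    # Count how often each word follows each other word (bigram statistics).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def suggest(prev_word, k=3):
        """Return the k most probable next words, autocomplete-style."""
        counts = following[prev_word]
        total = sum(counts.values())
        return [(word, count / total) for word, count in counts.most_common(k)]

    print(suggest("the"))   # "cat" comes out as the most likely continuation

    Models like ChatGPT rest on the same underlying idea of predicting likely continuations, just with a vastly larger corpus and a far richer model of context.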

    Training generative AI models

    The process by which models are developed to accommodate all this data is called training. A couple of underlying techniques are at play here for different types of models. ChatGPT uses what's called a transformer (that's what the T stands for). A transformer derives meaning from long sequences of text to understand how different words or semantic components might be related to one another, and then determines how likely they are to occur in proximity to one another. These transformers are run unsupervised on a vast corpus of natural language text in a process called pretraining (that's the P in ChatGPT), before being fine-tuned by human beings interacting with the model.
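
    At the heart of a transformer is the attention step, in which every token scores its relevance to every other token and builds its representation as a weighted mix of theirs. The numpy sketch below shows only that single scaled dot-product attention computation, with small random matrices standing in for the weights a real model learns during pretraining:

    import numpy as np

    np.random.seed(0)

    # Three token embeddings (one row per token) standing in for a short input sequence.
    # In a real transformer these come from learned embedding tables; here they're random.
    tokens = np.random.randn(3, 4)          # 3 tokens, embedding size 4

    # Learned projection matrices in a real model; random placeholders here.
    W_q, W_k, W_v = (np.random.randn(4, 4) for _ in range(3))

    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v

    # Scaled dot-product attention: each token scores its relevance to every other token...
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax per row

    # ...then builds its output as a weighted mix of the other tokens' values.
    attended = weights @ V
    print(weights.round(2))   # how strongly each token "attends" to the others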

    Another technique used to train models is what's known as a generative adversarial network, or GAN. In this technique, you have two algorithms competing against one another. One is generating text or images based on probabilities derived from a big data set; the other is a discriminative AI, which has been trained by humans to assess whether that output is real or AI-generated. The generative AI repeatedly tries to "trick" the discriminative AI, automatically adapting to favor outcomes that are successful. Once the generative AI consistently "wins" this competition, the discriminative AI gets fine-tuned by humans and the process begins anew.
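
    In skeleton form, that adversarial loop looks something like the PyTorch sketch below. This is a toy setup with tiny networks and one-dimensional "real" data, not the architecture behind any production image generator, but the structure is the same: the discriminator learns to separate real from fake, and the generator learns to fool it.

    import torch
    import torch.nn as nn

    # Generator: turns random noise into fake samples. Discriminator: scores real vs. fake.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    real_data = torch.randn(256, 1) * 0.5 + 2.0   # stand-in "real" distribution

    for step in range(2000):
        # Train the discriminator: label real samples 1, generated samples 0.
        noise = torch.randn(64, 8)
        fake = G(noise).detach()
        real = real_data[torch.randint(0, 256, (64,))]
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator: try to make the discriminator call its output "real".
        fake = G(torch.randn(64, 8))
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(5, 8)).detach())  # samples should drift toward the real distribution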

    One of the most important things to keep in mind here is that, while there is human intervention in the training process, most of the learning and adapting happens automatically. So many iterations are required to get the models to the point where they produce interesting results that automation is essential. The process is quite computationally intensive. 

    Is generative AI sentient?

    The mathematics and coding that go into creating and training generative AI models are quite complex, and well beyond the scope of this article. But if you interact with the models that are the end result of this process, the experience can be decidedly uncanny. You can get DALL-E to produce things that look like real works of art. You can have conversations with ChatGPT that feel like a conversation with another human. Have researchers truly created a thinking machine?

    Chris Phipps, a former IBM natural language processing lead who worked on Watson AI products, says no. He describes ChatGPT as a "very good prediction machine." Phipps says: 'It’s very good at predicting what humans will find coherent. It’s not always coherent (it mostly is) but that’s not because ChatGPT "understands." It’s the opposite: humans who consume the output are really good at making any implicit assumption we need in order to make the output make sense.'

    Phipps, who's also a comedy performer, draws a comparison to a common improv game called Mind Meld: 'Two people each think of a word, then say it aloud simultaneously—you might say "boot" and I say "tree." We came up with those words completely independently and at first, they had nothing to do with each other. The next two participants take those two words and try to come up with something they have in common and say that aloud at the same time. The game continues until two participants say the same word.

    Maybe two people both say "lumberjack." It seems like magic, but really it’s that we use our human brains to reason about the input ("boot" and "tree") and find a connection. We do the work of understanding, not the machine. There’s a lot more of that going on with ChatGPT and DALL-E than people are admitting. ChatGPT can write a story, but we humans do a lot of work to make it make sense.'

    Testing the limits of computer intelligence

    Certain prompts that we can give to these AI models will make Phipps' point fairly evident. For instance, consider the riddle "What weighs more, a pound of lead or a pound of feathers?" The answer, of course, is that they weigh the same (one pound), even though our instinct or common sense might tell us that the feathers are lighter.

    ChatGPT will answer this riddle correctly, and you might assume it does so because it is a coldly logical computer that doesn't have any "common sense" to trip it up. But that's not what's going on under the hood. ChatGPT isn't logically reasoning out the answer; it's just generating output based on its predictions of what should follow a question about a pound of feathers and a pound of lead. Since its training set includes a bunch of text explaining the riddle, it assembles a version of that correct answer. But if you ask ChatGPT whether two pounds of feathers are heavier than a pound of lead, it will confidently tell you they weigh the same amount, because that's still the most likely output to a prompt about feathers and lead, based on its training set. It can be fun to tell the AI that it's wrong and watch it flounder in response; I got it to apologize to me for its mistake and then suggest that two pounds of feathers weigh four times as much as a pound of lead.

    Why does AI art have too many fingers?

    A notable quirk of AI art is that it often represents people with profoundly weird hands. The "weird hands quirk" is becoming a common indicator that the art was artificially generated. This oddity offers more insight into how generative AI does (and doesn't) work. Start with the corpus that DALL-E and similar visual generative AI tools are pulling from: pictures of people usually provide a good look at their face but their hands are often partially obscured or shown at odd angles, so you can't see all the fingers at once. Add to that the fact that hands are structurally complex—they're notoriously difficult for people, even trained artists, to draw. And one thing that DALL-E isn't doing is assembling an elaborate 3D model of hands based on the various 2D depictions in its training set. That's not how it works. DALL-E doesn't even necessarily know that "hands" is a coherent category of thing to be reasoned about. All it can do is try to predict, based on the images it has, what a similar image might look like. Despite huge amounts of training data, those predictions often fall short.

    Phipps speculates that one factor is a lack of negative input: 'It mostly trains on positive examples, as far as I know. They didn't give it a picture of a seven fingered hand and tell it "NO! Bad example of a hand. Don’t do this." So it predicts the space of the possible, not the space of the impossible. Basically, it was never told to not create a seven fingered hand.'

    There's also the factor that these models don't think of the drawings they're making as a coherent whole; rather, they assemble a series of components that are likely to be in proximity to one another, as shown by the training data. DALL-E may not know that a hand is supposed to have five fingers, but it does know that a finger is likely to be immediately adjacent to another finger. So, sometimes, it just keeps adding fingers. (You can get the same results with teeth.) In fact, even this description of DALL-E's process is probably anthropomorphizing it too much; as Phipps says, "I doubt it has even the understanding of a finger. More likely, it is predicting pixel color, and finger-colored pixels tend to be next to other finger-colored pixels."

    Potential negative impacts of generative AI

    These examples show you one of the major limitations of generative AI: what those in the industry call hallucinations, which is a perhaps misleading term for output that is, by the standards of humans who use it, false or incorrect. All computer systems occasionally produce mistakes, of course, but these errors are particularly problematic because end users are unlikely to spot them easily: If you are asking a production AI chatbot a question, you generally won't know the answer yourself. You are also more likely to accept an answer delivered in the confident, fully idiomatic prose that ChatGPT and other models like it produce, even if the information is incorrect. 

    Even if a generative AI could produce output that's hallucination-free, there are various potential negative impacts:

    • Cheap and easy content creation: Hopefully it's clear by now that ChatGPT and other generative AIs are not real minds capable of creative output or insight. But the truth is that not everything that's written or drawn needs to be particularly creative. Many research papers at the high school or college undergraduate level only aim to synthesize publicly available data, which makes them a perfect target for generative AI. And the fact that synthetic prose or art can now be produced automatically, at a superhuman scale, may have weird or unforeseen results. Spam artists are already using ChatGPT to write phishing emails, for instance.
    • Intellectual property: Who owns an AI-generated image or text? If a copyrighted work forms part of an AI's training set, is the AI "plagiarizing" that work when it generates synthetic data, even if it doesn't copy it word for word? These are thorny, untested legal questions.
    • Bias: The content produced by generative AI is entirely determined by the underlying data on which it's trained. Because that data is produced by humans with all their flaws and biases, the generated results can also be flawed and biased, especially if they operate without human guardrails. OpenAI, the company that created ChatGPT, put safeguards in the model before opening it to public use that prevent it from doing things like using racial slurs; however, others have claimed that these sorts of safety measures represent their own kind of bias.
    • Power consumption: In addition to heady philosophical questions, generative AI raises some very practical issues: for one thing, training a generative AI model is hugely compute intensive. This can result in big cloud computing bills for companies trying to get into this space, and ultimately raises the question of whether the increased power consumption—and, ultimately, greenhouse gas emissions—is worth the final result. (We also see this question come up regarding cryptocurrencies and blockchain technology.)

    Use cases for generative AI

    Despite these potential problems, the promise of generative AI is hard to miss. ChatGPT's ability to extract useful information from huge data sets in response to natural language queries has search giants salivating. Microsoft is testing its own AI chatbot, dubbed "Sydney," though it's still in beta and the results have been decidedly mixed.

    But Phipps thinks that more specialized types of search are a perfect fit for this technology. "One of my last customers at IBM was a large international shipping company that also had a billion-dollar supply chain consulting side business," he says.

    Phipps adds: 'Their problem was that they couldn’t hire and train entry level supply chain consultants fast enough—they were losing out on business because they couldn’t get simple customer questions answered quickly. We built a chatbot to help entry level consultants search the company's extensive library of supply chain manuals and presentations that they could turn around to the customer. If I were to build a solution for that same customer today, just a year after I built the first one, I would 100% use ChatGPT and it would likely be far superior to the one I built. What’s nice about that use case is that there is still an expert human-in-the-loop double-checking the answer. That mitigates a lot of the ethical issues. There is a huge market for those kinds of intelligent search tools meant for experts.'

    Other potential use cases include:

    • Code generation: The idea that generative AI might write computer code for us has been bubbling around for years now. It turns out that large language models like ChatGPT can understand programming languages as well as natural spoken languages, and while generative AI probably isn't going to replace programmers in the immediate future, it can help increase their productivity.
    • Cheap and easy content creation: As much as this one is a concern (listed above), it's also an opportunity. The same AI that writes spam emails can write legitimate marketing emails, and there's been an explosion of AI copywriting startups. Generative AI thrives when it comes to highly structured forms of prose that don't require much creativity, like resumes and cover letters.
    • Engineering design: Visual art and natural language have gotten a lot of attention in the generative AI space because they're easy for ordinary people to grasp. But similar techniques are being used to design everything from microchips to new drugs—and will almost certainly enter the IT architecture design space soon enough.

    Conclusion

    Generative AI will surely disrupt some industries and will alter—or eliminate—many jobs. Articles like this one will continue to be written by human beings, however, at least for now. CNET recently tried putting generative AI to work writing articles but the effort foundered on a wave of hallucinations. If you're worried, you may want to get in on the hot new job of tomorrow: AI prompt engineering.

    Author: Josh Fruhlinger

    Source: InfoWorld 

  • ChatGPT's Next Steps

    ChatGPT's Next Steps

    Mira Murati wasn’t always sure OpenAI’s generative chatbot ChatGPT was going to be the sensation it has become. When she joined the artificial intelligence firm in 2018, AI’s capabilities had expanded to being good at strategy games, but the sort of language model people use today seemed a long way off.

    “In 2019, we had GPT3, and there was the first time that we had AI systems that kind of showed some sense of language understanding. Before that, we didn’t think it was really possible that AI systems would get this language understanding,” Murati, now chief technology officer at OpenAI, said onstage at the Atlantic Festival on Friday. “In fact, we were really skeptical that was the case.” What a difference a few years makes. These days, users are employing ChatGPT in a litany of ways to enhance their personal and professional lives. “The rate of technological progress has been incredibly steep,” Murati said.

    The climb continues. Here’s what Murati said to expect from ChatGPT as the technology continues to develop.

    You'll be able to actually chat with the chatbots

    You may soon be able to interact with ChatGPT without having to type anything in, Murati said. “We want to move further away from our current interaction,” she said. “We’re sort of slaves to the keyboard and the touch mechanism of the phone. And if you really think about it, that hasn’t really been revolutionized in decades.”

    Murati envisions users being able to talk with ChatGPT the same way they might chat with a friend or a colleague. “That is really the goal — to interact with these AI systems in a way that’s actually natural, in a way that you’d collaborate with someone, and it’s high bandwidth,” she said. “You could talk in text and just exchange messages … or I could show an image and say, ‘Hey, look, I got all these business cards, when I was in these meetings. Can you just put them in my contacts list?’”

    It remains to be seen what kind of hardware could make these sorts of interactions possible, though former Apple designer Jony Ive is reportedly in advanced talks with OpenAI to produce a consumer product meant to be “the iPhone of artificial intelligence.”

    AI will think on deeper levels

    In their current iteration, AI chatbots are good at collaborating with humans and responding to our prompts. The goal, says Murati, is to have the bots think for themselves. “We’re trying to build [a] generally intelligent system. And what’s missing right now is new ideas,” Murati said. “With a completely new idea, like the theory of general relativity, you need to have the capability of abstract thinking.” “And so that’s really where we’re going — towards these systems that will eventually be able to help us with extremely hard problems. Not just collaborate alongside us, but do things that, today, we’re not able to do at all.”

    The everyday ChatGPT user isn’t looking to solve the mysteries of the universe, but one upshot of improving these systems is that chatbots should grow more and more accurate. When asked if ChatGPT would be able to produce answers on par with Wikipedia, Murati said, “It should do better than that. It should be more scientific-level accuracy.”

    With bots that can think through answers, users should be able to “really trace back the pieces of information, ideally, or at least understand why, through reasoning, sort of like a chain of thought, understand why the system got to the answer,” she said.

    Revolution is coming to the way we learn and work

    Murati acknowledged that evolving AI technology will likely disrupt the way that Americans learn and work — a shift that will come with risks and opportunities. Murati noted that students have begun using AI chatbots to complete assignments for them. In response, she says, “In many ways we’ll probably have to change how we teach.” While AI opens the door for academic dishonesty, it also may be a unique teaching tool, she said.

    “Right now you’ve got a teacher in a classroom of 30 students, [and] it’s impossible to customize the learning, the information, to how they best learn,” Murati said. “And this is what AI can offer. It can offer this personalized tutor that customizes learning and teachings to you, to how you best perceive and understand the world.”

    Similar disruption may be coming to workplaces, where there is widespread fear that AI may be taking the place of human employees. “Some jobs will be created, but just like every major revolution, I think a lot of jobs will be lost. There will be maybe, probably, a bigger impact on jobs than in any other revolution, and we have to prepare for this new way of life,” says Murati. “Maybe we work much less. Maybe the workweek changes entirely.”

    No matter what, the revolution is coming. And it will be up to the public and the people who govern us to determine how and how much the AI revolution affects our lives. “I know there’s a lot of engagement right now with D.C. on these topics and understanding the impact on workforce and such, but we don’t have the answers,” Murati said. “We’re gonna have to figure them out along the way, and I think it is going to require a lot of work and thoughtfulness.”

    Date: October 10, 2023

    Author: Ryan Ermey

    Source: CNBC | Make It

  • Data Disasters: 8 Infamous Analytics and AI Failures

    Data Disasters: 8 Infamous Analytics and AI Failures

    Insights from data and machine learning algorithms can be invaluable, but mistakes can cost you reputation, revenue, or even lives. These high-profile analytics and AI blunders illustrate what can go wrong.

    In 2017, The Economist declared that data, rather than oil, had become the world’s most valuable resource. The refrain has been repeated ever since. Organizations across every industry have been and continue to invest heavily in data and analytics. But like oil, data and analytics have their dark side.

    According to CIO’s State of the CIO 2023 report, 34% of IT leaders say that data and business analytics will drive the most IT investment at their organization this year. And 26% of IT leaders say machine learning/artificial intelligence will drive the most IT investment. Insights gained from analytics and actions driven by machine learning algorithms can give organizations a competitive advantage, but mistakes can be costly in terms of reputation, revenue, or even lives.

    Understanding your data and what it’s telling you is important, but it’s also important to understand your tools, know your data, and keep your organization’s values firmly in mind.

    Here are a handful of high-profile analytics and AI blunders from the past decade to illustrate what can go wrong.

    ChatGPT hallucinates court cases

    Advances made in 2023 by large language models (LLMs) have stoked widespread interest in the transformative potential of generative AI across nearly every industry. OpenAI’s ChatGPT has been at the center of this surge in interest, foreshadowing how generative AI holds the power to disrupt the nature of work in nearly every corner of business.

    But the technology still has a long way to go before it can reliably take over most business processes, as attorney Steven A. Schwartz learned when he found himself in hot water with US District Judge P. Kevin Castel in 2023 after using ChatGPT to research precedents in a suit against Colombian airline Avianca.

    Schwartz, an attorney with Levidow, Levidow & Oberman, used the OpenAI generative AI chatbot to find prior cases to support a case filed by Avianca passenger Roberto Mata for injuries he sustained in 2019. The only problem? At least six of the cases submitted in the brief did not exist. In a document filed in May, Judge Castel noted the cases submitted by Schwartz included false names and docket numbers, along with bogus internal citations and quotes.

    In an affidavit, Schwartz told the court that it was the first time he had used ChatGPT as a legal research source and he was “unaware of the possibility that its content could be false.” He admitted that he had not confirmed the sources provided by the AI chatbot. He also said that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

    As of June 2023, Schwartz was facing possible sanctions by the court.

    AI algorithms identify everything but COVID-19

    Since the COVID-19 pandemic began, numerous organizations have sought to apply machine learning (ML) algorithms to help hospitals diagnose or triage patients faster. But according to the UK’s Turing Institute, a national center for data science and AI, the predictive tools made little to no difference.

    MIT Technology Review has chronicled a number of failures, most of which stem from errors in the way the tools were trained or tested. The use of mislabeled data or data from unknown sources was a common culprit.

    Derek Driggs, a machine learning researcher at the University of Cambridge, together with his colleagues, published a paper in Nature Machine Intelligence that explored the use of deep learning models for diagnosing the virus. The paper determined the technique was not fit for clinical use. For example, Driggs’ group found that their own model was flawed because it was trained on a data set that included scans of patients who were lying down during the scan and patients who were standing up. The patients who were lying down were much more likely to be seriously ill, so the algorithm learned to identify COVID risk based on the position of the person in the scan.

    A similar example includes an algorithm trained with a data set that included scans of the chests of healthy children. The algorithm learned to identify children, not high-risk patients.

    Zillow wrote down millions of dollars, slashed workforce due to algorithmic home-buying disaster

    In November 2021, online real estate marketplace Zillow told shareholders it would wind down its Zillow Offers operations and cut 25% of the company’s workforce — about 2,000 employees — over the next several quarters. The home-flipping unit’s woes were the result of the error rate in the machine learning algorithm it used to predict home prices.

    Zillow Offers was a program through which the company made cash offers on properties based on a “Zestimate” of home values derived from a machine learning algorithm. The idea was to renovate the properties and flip them quickly. But a Zillow spokesperson told CNN that the algorithm had a median error rate of 1.9%, and the error rate could be much higher, as much as 6.9%, for off-market homes.
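
    As a back-of-the-envelope illustration of why those error rates matter (the home price below is invented for the example), even the median error translates into thousands of dollars per purchase:

    home_price = 400_000

    for label, error_rate in [("median, on-market", 0.019), ("off-market", 0.069)]:
        print(f"{label}: ~${home_price * error_rate:,.0f} swing on a ${home_price:,} home")
    # median, on-market: ~$7,600 swing on a $400,000 home
    # off-market: ~$27,600 swing on a $400,000 home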

    CNN reported that Zillow bought 27,000 homes through Zillow Offers since its launch in April 2018 but sold only 17,000 through the end of September 2021. Black swan events like the COVID-19 pandemic and a home renovation labor shortage contributed to the algorithm’s accuracy troubles.

    Zillow said the algorithm had led it to unintentionally purchase homes at higher prices than its current estimates of future selling prices, resulting in a $304 million inventory write-down in Q3 2021.

    In a conference call with investors following the announcement, Zillow co-founder and CEO Rich Barton said it might be possible to tweak the algorithm, but ultimately it was too risky.

    UK lost thousands of COVID cases by exceeding spreadsheet data limit

    In October 2020, Public Health England (PHE), the UK government body responsible for tallying new COVID-19 infections, revealed that nearly 16,000 coronavirus cases went unreported between Sept. 25 and Oct. 2. The culprit? Data limitations in Microsoft Excel.

    PHE uses an automated process to transfer COVID-19 positive lab results as a CSV file into Excel templates used by reporting dashboards and for contact tracing. Unfortunately, Excel spreadsheets can have a maximum of 1,048,576 rows and 16,384 columns per worksheet. Moreover, PHE was listing cases in columns rather than rows. When the cases exceeded the 16,384-column limit, Excel cut off the 15,841 records at the bottom.
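
    The failure mode is easy to reproduce in miniature. The case counts below are made up, but the sketch shows how records appended as columns simply vanish once the 16,384-column ceiling is reached:

    # Toy illustration of the failure described above: cases appended as new
    # *columns* silently fall off once a worksheet's 16,384-column limit is hit.
    MAX_COLS = 16_384

    incoming_cases = [f"case_{i}" for i in range(30_000)]   # hypothetical positive results
    worksheet_row = incoming_cases[:MAX_COLS]               # what fits on the sheet
    dropped = incoming_cases[MAX_COLS:]                     # what gets cut off

    print(f"recorded: {len(worksheet_row):,}, silently dropped: {len(dropped):,}")
    # recorded: 16,384, silently dropped: 13,616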

    The “glitch” didn’t prevent individuals who got tested from receiving their results, but it did stymie contact tracing efforts, making it harder for the UK National Health Service (NHS) to identify and notify individuals who were in close contact with infected patients. In a statement on Oct. 4, Michael Brodie, interim chief executive of PHE, said NHS Test and Trace and PHE resolved the issue quickly and transferred all outstanding cases immediately into the NHS Test and Trace contact tracing system.

    PHE put in place a “rapid mitigation” that splits large files and has conducted a full end-to-end review of all systems to prevent similar incidents in the future.

    Healthcare algorithm failed to flag Black patients

    In 2019, a study published in Science revealed that a healthcare prediction algorithm, used by hospitals and insurance companies throughout the US to identify patients in need of “high-risk care management” programs, was far less likely to single out Black patients.

    High-risk care management programs provide trained nursing staff and primary-care monitoring to chronically ill patients in an effort to prevent serious complications. But the algorithm was much more likely to recommend white patients for these programs than Black patients.

    The study found that the algorithm used healthcare spending as a proxy for determining an individual’s healthcare need. But according to Scientific American, the healthcare costs of sicker Black patients were on par with the costs of healthier white people, which meant they received lower risk scores even when their need was greater.

    The study’s researchers suggested that a few factors may have contributed. First, people of color are more likely to have lower incomes, which, even when insured, may make them less likely to access medical care. Implicit bias may also cause people of color to receive lower-quality care.

    While the study did not name the algorithm or the developer, the researchers told Scientific American they were working with the developer to address the situation.

    Dataset trained Microsoft chatbot to spew racist tweets

    In March 2016, Microsoft learned that using Twitter interactions as training data for machine learning algorithms can have dismaying results.

    Microsoft released Tay, an AI chatbot, on the social media platform. The company described it as an experiment in “conversational understanding.” The idea was the chatbot would assume the persona of a teen girl and interact with individuals via Twitter using a combination of machine learning and natural language processing. Microsoft seeded it with anonymized public data and some material pre-written by comedians, then set it loose to learn and evolve from its interactions on the social network.

    Within 16 hours, the chatbot posted more than 95,000 tweets, and those tweets rapidly turned overtly racist, misogynist, and anti-Semitic. Microsoft quickly suspended the service for adjustments and ultimately pulled the plug.

    “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, corporate vice president, Microsoft Research & Incubations (then corporate vice president of Microsoft Healthcare), wrote in a post on Microsoft’s official blog following the incident.

    Lee noted that Tay’s predecessor, Xiaoice, released by Microsoft in China in 2014, had successfully had conversations with more than 40 million people in the two years prior to Tay’s release. What Microsoft didn’t take into account was that a group of Twitter users would immediately begin tweeting racist and misogynist comments to Tay. The bot quickly learned from that material and incorporated it into its own tweets.

    “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” Lee wrote.

    Amazon's AI recruiting tool recommended only men

    Like many large companies, Amazon is hungry for tools that can help its HR function screen applications for the best candidates. In 2014, Amazon started working on AI-powered recruiting software to do just that. There was only one problem: The system vastly preferred male candidates. In 2018, Reuters broke the news that Amazon had scrapped the project.

    Amazon’s system gave candidates star ratings from 1 to 5. But the machine learning models at the heart of the system were trained on 10 years’ worth of resumes submitted to Amazon — most of them from men. As a result of that training data, the system started penalizing phrases in the resume that included the word “women’s” and even downgraded candidates from all-women colleges.

    At the time, Amazon said the tool was never used by Amazon recruiters to evaluate candidates.

    The company tried to edit the tool to make it neutral, but ultimately decided it could not guarantee it would not learn some other discriminatory way of sorting candidates and ended the project.

    Target analytics violated privacy

    In 2012, an analytics project by retail titan Target showcased how much companies can learn about customers from their data. According to the New York Times, in 2002 Target’s marketing department started wondering how it could determine whether customers were pregnant. That line of inquiry led to a predictive analytics project that would famously lead the retailer to inadvertently reveal to a teenage girl’s family that she was pregnant. That, in turn, would lead to all manner of articles and marketing blogs citing the incident as part of advice for avoiding the “creepy factor.”

    Target’s marketing department wanted to identify pregnant individuals because there are certain periods in life — pregnancy foremost among them — when people are most likely to radically change their buying habits. If Target could reach out to customers in that period, it could, for instance, cultivate new behaviors in those customers, getting them to turn to Target for groceries or clothing or other goods.

    Like all other big retailers, Target had been collecting data on its customers via shopper codes, credit cards, surveys, and more. It mashed that data up with demographic data and third-party data it purchased. Crunching all that data enabled Target’s analytics team to determine that there were about 25 products sold by Target that could be analyzed together to generate a “pregnancy prediction” score. The marketing department could then target high-scoring customers with coupons and marketing messages.
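
    No one outside Target knows exactly what the model looked like, but a purchase-based propensity score can be as simple as a weighted sum over relevant products. The products and weights in the sketch below are invented purely for illustration:

    # Hypothetical sketch of a purchase-based propensity score like the one described.
    # The products and weights are invented; the real model and its inputs are not public.
    WEIGHTS = {
        "unscented_lotion": 0.30,
        "prenatal_vitamins": 0.55,
        "cotton_balls_large": 0.20,
        "zinc_supplement": 0.25,
    }

    def propensity_score(basket):
        return sum(WEIGHTS.get(item, 0.0) for item in basket)

    shopper = ["unscented_lotion", "prenatal_vitamins", "bread"]
    print(propensity_score(shopper))   # 0.85 -- a high score would trigger targeted coupons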

    Additional research would reveal that studying customers’ reproductive status could feel creepy to some of those customers. According to the Times, the company didn’t back away from its targeted marketing, but did start mixing in ads for things they knew pregnant women wouldn’t buy — including ads for lawn mowers next to ads for diapers — to make the ad mix feel random to the customer.

    Date: July 3, 2023

    Author: Thor Olavsrud

    Source: CIO

  • From Buzz to Bust: Investigating IT's Overhyped Technologies  

    From Buzz to Bust: Investigating IT's Overhyped Technologies

    CIOs are not immune to infatuation with the promise of emerging tech. Here, IT leaders and analysts share which technologies they believe are primed to underdeliver, offering advice on right-sizing expectations for each one.

    Most CIOs and IT staffers remain, at heart, technologists, with many proclaiming their interest in shiny new tech toys. They may publicly preach “No technology for technology’s sake,” but they still frequently share their fascination with the latest tech gadgets. They’re not the only ones enthralled by tech.

    With technology and tech news now both pervasive and mainstream, many outside of IT — from veteran board members to college-age interns — are equally enthusiastic about bleeding-edge technologies. But all that interest can quickly blow past buzz and hit hype — that is, the point where the technology gets seen more as a panacea for whatever plagues us rather than the helpful tool that it is. It’s then that the hopes for the technology get way ahead of what it can actually deliver today. 

    “Nearly every new technology is naturally accompanied by hype and/or fear, but at the same time there is almost always a core of merit and business value to that new tech. The challenge is moving from the initial vision/promise stage, to broad commercial and consumer adoption and pervasiveness,” says George Corbin, board director at Edgewell Personal Care; former chief digital officer at Marriott and Mars Inc.; a faculty member at the National Association of Corporate Directors; and an active member of the MIT Sloan CIO Symposium community.

    With that in mind, we asked tech leaders in various roles and industries to list what technologies they think are overhyped and to put a more realistic spin on each one’s potential. Here’s what they say on the topic.

    1. Generative AI

    Perhaps not surprisingly, generative AI tops the list of today’s overhyped tech. No one denies its transformative potential, but digital leaders say a majority of people seem to think generative AI, which Gartner recently placed at the peak of inflated expectations in its 2023 hype cycle, has more capabilities than it does — at least at this time.

    Consider some recent survey findings. A July 2023 report from professional services firm KPMG found that 97% of the 200 senior US business leaders it polled anticipate that generative AI will have a huge impact on their organizations in the short term, 93% believe it will provide value to their business, and 80% believe it will disrupt their industry.

    Yet most execs also admit they’re not ready to fully harness that potential. Another July report, the IDC Executive Preview, sponsored by Teradata, titled “The Possibilities and Realities of Generative AI,” found that 86% of the 900 execs it polled believe more governance is needed to ensure the quality and integrity of gen AI insights, with 66% expressing concerns around gen AI’s potential for bias and disinformation. Additionally, only 30% say they’re extremely prepared or even ready to leverage generative AI today and just 42% fully believe that they’ll have the skills in place to implement the technology in the next 6 to 12 months, among other issues their gen AI strategies face today.

    At the same time, today’s hype may be distracting enterprise leaders from fully understanding how generative AI (also known as GAI) will evolve and how they can use that power in the future. “The anticipation and fear of the impact of generative AI in particular, and its relationship to artificial general intelligence (AGI), makes it overhyped,” says Daryl Cromer, vice president and CTO for the PCs and smart devices division at Lenovo.

    This overhyped state, he adds, makes it “easy to be overly optimistic about what will happen this year and simultaneously understate what will happen in three to five years.” He says generative AI’s “potential is great; it will transform many industries. But it should be noted that digital transformation is complex and time consuming; it’s not like a firm can just take a GAI ‘black box’ and plug it into their business and achieve increased efficiency right away. There’s more likely to be a J-curve to ROI as a firm incurs expenses acquiring the technology and spends on cloud services to support it. Firms could even encounter pushback from affected stakeholders, like they are now with the case of film and television writers and actors.”

    2. Quantum computing

    Tech giants, startups, research institutions, and even governments are all working on or investing in quantum computing. There’s good reason for all that interest: Quantum computing uses quantum mechanics principles to perform calculations and, thus, is exponentially faster and more powerful than today’s computing capabilities. 

    Yet it’s anyone’s guess when, exactly, this new type of computing will become operational. There’s even more uncertainty on when, and whether, quantum computing would become available for anyone outside the small circle of players already in the space today.

    “People may think it’s going to replace [our classical computing] computers but it’s not,” at least in the foreseeable future, says Brian Hopkins, vice president for the emerging tech portfolio at research firm Forrester. Hopkins adds: “You see these big announcements from IBM or Google about quantum computing and people think, ‘Quantum is close.’ Those make great headlines, but the truth about quantum computing’s future is far more nuanced and [business leaders] need to understand that.”

    Yet that isn’t holding back expectations. A 2022 survey of 501 UK executives by professional services firm EY found that 97% expect quantum computing to disrupt their sectors to a high or moderate extent, with 48% believing “that quantum computing will reach sufficient maturity to play a significant role in the activities of most companies in their respective sectors by 2025.” The EY survey also reveals how unprepared organizations are to meet what they believe is ahead: Only 33% said their organizations have started to plan to prepare for the technology’s commercialization and only 24% have set up or plan to set up pilot teams to explore its potential.

    “People are aware quantum computing is coming, but I think there is an underestimation of what it will take [to leverage its power],” adds Seth Robinson, vice president for industry research at trade association CompTIA. “I think people think it’s just going to be a much more powerful way of running what we already have, but in reality what we have is going to have to be rewritten to work with quantum. You won’t be able to just swap out the engine. And it’s not going to turn into a product for the mass market.”

    3. The metaverse — and extended reality in general

    Although some of the excitement about the coming metaverse has died down, some say this concept remains overplayed. They’re skeptical of any claims that the metaverse will have us all living in a new digital realm, and they question whether the metaverse will have any big impact on daily life and everyday business anytime soon. Same goes for extended reality (XR) — that fusion of augmented reality, virtual reality and mixed reality.

    “Virtual spaces provide a completely different experience, popularly known as an immersive experience for customers. However, in my opinion, the actual market potential may probably not be as big as it is being projected now,” says Richard August, managing partner for CIO Advisory Services at Tata Consultancy Services. “The number of use cases and utility values are limited, impacting the potential. Devices to support the ubiquity of these technologies such as VR sets are not available at a scalable, affordable price. Additionally, there have been several instances of negative health effects — such as fatigue, impact on vision and hearing — being reported by using the devices that support these technologies, which limits large-scale adoption.”

    Forrester’s Hopkins voices similar caution on the technology’s uptake in the near term. “The form factors today aren’t enticing enough for people to adopt this new technology, so [adoption] is going to take longer than people may think,” he says. Hopkins says researchers do, indeed, see areas where the technology has taken off. Extended reality is useful in HR for training employees, and it provides value in industrial use cases where a digital overlay can guide workers through complex scenarios. “But that’s a pretty small slice of the overall opportunity,” he adds.

    4. Web3: Blockchain, NFTs, and cryptocurrencies

    Similar to their feelings about the immersive web, tech leaders say Web3 and its components — blockchain, NFTs, and cryptocurrencies — haven’t quite delivered on all their promises. “They just need to see more maturity before we invest in those things,” says Rebecca Fox, group CIO for NCC Group, a UK-headquartered IT security company.

    Others have made similar observations. Corbin, for one, says blockchain has “huge business potential in smart contracts — supply chain transparency, healthcare, finance, currency, artwork, media, fraud prevention, IP protection, deep fake mitigation — but slow uptake on implementing.” He points out that it’s not as impenetrable as first promoted, and it’s hard to scale. Meanwhile, its decentralized nature coupled with a lack of regulation means that blockchain contracts are not legally recognized in most countries yet, he adds. Digital experts cite issues with other Web3 technologies, too, noting that most companies can’t figure out what to do with cryptocurrencies, for example, as they struggle with how to account for them and how to report them out to the street.

    Furthermore, many people remain skeptical about cryptocurrencies and NFTs — especially after the past year’s headlines about crypto exchange problems and NFT devaluations. Advisers say CIOs should, thus, be mindful of the hype but nonetheless keep a watchful eye on the development of these technologies. “Though it’s in its early stages, we’re seeing lots of momentum behind the shift from Web2 to Web3 — and now Web4 — which will undoubtedly transform the way businesses operate, and how we own and transact property. It holds a lot of promise for the philosophical sense of property, ownership, and self-control of your identity inside the broader digital world at large,” says Jeff Wong, EY’s global chief innovation officer. He adds: “At this stage, Web3/4 is an idea that creates more questions than answers, but we think the questions are worth considering.”

    Date: August 22, 2023

    Author: Mary K. Pratt

    Source: CIO

  • From Visualization to Analytics: Generative AI's Data Mastery

    From Visualization to Analytics: Generative AI's Data Mastery

    Believe it or not, generative AI is more than just text in a box. It transcends the boundaries of traditional creative applications, extending what users can do far beyond text generation. In addition to its prowess in crafting captivating narratives and artistic creations, generative AI demonstrates its versatility by helping users power their own data analytics.

    With its advanced algorithms and language comprehension, it can navigate complex datasets and distill valuable insights. This transformative shift underscores the convergence of creativity and analysis, as generative AI empowers users to harness its intelligence for data-driven decision-making. 

    From uncovering hidden patterns to providing actionable recommendations, generative AI’s proficiency in data analytics heralds a new era where innovation spans the spectrum from artistic expression to informed business strategies. 

    So let’s take a brief look at some examples of how generative AI can be used for data analytics. 

    Datasets for Analysis

    Our first example is its capacity to perform data analysis when provided with a dataset. Imagine equipping generative AI with a dataset rich in information from various sources. Through its proficient understanding of language and patterns, it can swiftly navigate and comprehend the data, extracting meaningful insights that might otherwise remain hidden from the casual viewer. Even experts can miss patterns after staring at data long enough, but AI is built to detect them.

    All of this goes beyond mere computation. By crafting human-readable summaries and explanations, AI is able to make the findings accessible to a wider audience, especially to non-expert stakeholders who may not have a deep-level understanding of what they’re being shown. 

    This symbiotic fusion of data analysis and natural language generation underscores AI’s role as a versatile partner in unraveling the layers of information that drive informed decisions.
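
    In practice, this often means handing the model a compact profile of the data rather than the raw file. The sketch below assumes a hypothetical ask_llm() helper — a stand-in for whichever chat-completion client you actually use, not a real library call:

    import pandas as pd

    def ask_llm(prompt: str) -> str:
        # Placeholder only: wire this up to your LLM provider's SDK.
        return "(model-generated summary would appear here)"

    # Small inline dataset standing in for whatever table you want analyzed.
    df = pd.DataFrame({
        "region": ["north", "south", "south", "east", "north"],
        "revenue": [120.0, 85.5, 91.0, 40.2, 133.8],
    })

    # Give the model a compact, text-only profile of the data rather than the raw file.
    profile = df.describe(include="all").to_string()

    prompt = (
        "You are a data analyst. Summarize the notable trends, outliers, and "
        "data-quality issues in this dataset profile for a non-technical reader:\n\n"
        + profile
    )
    print(ask_llm(prompt))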

    Data Visualization Through Charts

    The second example of how generative AI is multifaceted is its ability to create user-friendly charts that seamlessly integrate with other data visualization tools. Suppose you have a dataset and require a visual representation that’s both insightful and easily transferable to other programs. Generative AI can step up to the plate by creating charts that are not only visually appealing but also tailored to your data’s characteristics. 

    Whether it’s a bar graph, scatter plot, or line chart, generative AI can provide charts ready for your preferred mode of visualization. This streamlined process bridges the gap between data analysis and visualization, empowering users to effortlessly harness their data’s potential for impactful presentations and strategic insights.

    Idea Generation

    This isn’t isolated to just data analytics. Most marketers have found that generative AI tools are great at this. That’s because the technology is great at helping its human users with idea generation and refining concepts by acting as a collaborative brainstorming partner. Consider a scenario where you’re exploring a new project or problem-solving endeavor. Engaging generative AI allows you to bounce ideas off of it, unveiling a host of potential questions and perspectives that might not have otherwise occurred to you. 

    Through its adept analysis of the input and context, generative AI not only generates thought-provoking questions but also offers insights that help you delve deeper into your topic. This relationship between the human user and the AI transforms generative AI into an invaluable ally, driving the exploration of ideas, prompting critical thinking, and guiding the conversation toward uncharted territories of creativity and innovation.

    Cleaning Up Data and Finding Anomalies

    As mentioned above, generative AI has a knack for finding patterns, and that knack isn't limited to finding desirable ones. With a good generative AI program, a data team can take on even the meticulous task of data cleaning and anomaly detection. Picture a dataset laden with imperfections and anomalies that could skew analysis results. The AI can be harnessed to comb through the data, identifying inconsistencies, outliers, and irregularities that might otherwise go unnoticed. 

    Again, AI has a keen eye for patterns and deviations to aid in ensuring the integrity of the dataset. Human error is human error, but with AI, that error can be reduced significantly. Furthermore, generative AI doesn’t just flag anomalies—it provides insights into potential causes and implications. This fusion of data cleaning and analysis empowers users to navigate the complexities of their data landscape with confidence, making informed decisions based on reliable, refined datasets.
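
    The checks themselves can be quite simple. The sketch below applies a plain interquartile-range rule — the kind of routine an AI assistant might generate on request — to some toy sensor readings:

    import pandas as pd

    # Toy sensor readings with one obvious glitch (250.0) and one missing value.
    df = pd.DataFrame({"reading": [10.1, 9.8, 10.3, 250.0, 10.0, None, 9.9]})

    # A simple interquartile-range rule: flag values far outside the middle of the data.
    q1, q3 = df["reading"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df["outlier"] = (df["reading"] < q1 - 1.5 * iqr) | (df["reading"] > q3 + 1.5 * iqr)
    df["missing"] = df["reading"].isna()

    print(df)   # the 250.0 row is flagged as an outlier; the None row as missing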

    Creating Synthetic Data

    Synthetic data generation is yet another facet where generative AI’s adaptability shines. When faced with limited or sensitive datasets, the AI can step in to generate synthetic data that mimics the characteristics of the original information. This synthetic data serves as a viable alternative for training models, testing algorithms, and ensuring privacy compliance. By leveraging its understanding of data patterns and structures, generative AI crafts synthetic datasets that maintain statistical fidelity while safeguarding sensitive information. This innovative application showcases generative AI’s role in bridging data gaps and enhancing the robustness of data-driven endeavors, providing a solution that balances the need for accurate analysis with the imperative of data security.
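    A deliberately simple sketch of the idea, with column names, sizes, and distributions all assumed for illustration: fit basic statistics on the real columns and sample new rows from them. Real synthetic-data tools go much further (joint correlations, privacy guarantees, deep generative models); this version only preserves per-column marginals.

    ```python
    # Sketch: generate synthetic rows that mimic simple per-column statistics
    # of a (possibly sensitive) original dataset. Everything here is assumed.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    real = pd.DataFrame({
        "age": [34, 45, 29, 52, 41, 38],
        "income": [52_000, 61_000, 48_000, 75_000, 58_000, 55_000],
        "plan": ["basic", "pro", "basic", "pro", "basic", "basic"],
    })

    n = 1_000
    categories = real["plan"].unique()
    freqs = real["plan"].value_counts(normalize=True).reindex(categories).values

    synthetic = pd.DataFrame({
        # Numeric columns: sample from normals fitted to the real mean/std.
        "age": rng.normal(real["age"].mean(), real["age"].std(), n).round(),
        "income": rng.normal(real["income"].mean(), real["income"].std(), n).round(-2),
        # Categorical column: resample according to observed frequencies.
        "plan": rng.choice(categories, size=n, p=freqs),
    })

    print(synthetic.describe(include="all"))
    ```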

    Conclusion

    Some great stuff, huh? As you have just read, generative AI isn’t only for creating amazing images or powering a chatbot that helps office workers with their tasks. It’s a technology that, utilized correctly, can help any data professional supercharge their data analytics. Now, are you ready to learn more?

    Date: September 22, 2023

    Author:

    Source: ODSC

  • Functions and applications of generative AI models

    Functions and applications of generative AI models

    Learn how industries use generative AI models, which function on their own to create new content and alongside discriminative models to identify, for example, 'real' vs. 'fake.'

    AI encompasses many techniques for developing software models that can accomplish meaningful work, including neural networks, genetic algorithms and reinforcement learning. Previously, only humans could perform this work. Now, these techniques can build different kinds of AI models.

    Generative AI models are one of the most important kinds of AI models. A generative model creates things. Any tool that uses AI to generate a new output -- a new picture, a new paragraph or a new machine part design -- incorporates a generative model.

    The various applications for generative models

    Generative AI functions across a broad spectrum of applications, including the following:

    • Natural language interfaces. In performing both speech and text synthesis, these AI systems power digital assistants such as Amazon's Alexa, Apple's Siri and Google Assistant, as well as tools that auto-summarize text or autogenerate press releases from a set of key facts.
    • Image synthesis. These AI systems create images based on instructions or directions. They will, if told to, create an image of a kiwi bird eating a kiwi fruit while sitting on a big padlock key. They can be used to create ads, fashion designs or movie production storyboards. DALL-E, Midjourney and Wombo Dream are examples of AI image generators.
    • Space synthesis. AI can also create three-dimensional spaces and objects, both real and digital. It can design buildings, rooms and even whole city plans, as well as virtual spaces for gameplay or metaverse-style collaboration. Spacemaker is a real-world architectural program, while Meta's BuilderBot (in development) will focus on virtual spaces.
    • Product design and object synthesis. Now that the public is more aware of 3D printing, it's worth noting that generative AI can design and even create physical objects like machine parts and household goods. AutoCAD and SOL75 are tools using AI to perform or assist in physical object design.

    Many tools harness both generative and discriminative AI models. Discriminative models, conversely, identify things. Any tool that uses AI to identify, categorize, tag or assess the authenticity of an artifact (physical or digital) incorporates a discriminative model. A discriminative model typically doesn't say categorically what something is, but rather what it most likely is based on what it sees.

    How generative and discriminative models function together

    A generative adversarial network (GAN) uses a generative model to create outputs and an adversarial discriminative model to evaluate them, with feedback loops between the two. For example, a GAN might be tasked with writing fake restaurant reviews. The generative model would attempt to create seemingly real reviews, then pass them, along with real reviews, through the discriminative model. The discriminator acts as an adversary to the generative model, trying to identify the fakes.

    The feedback loops ensure that the exercise trains both models to perform better. The discriminator, which is then told which inputs were real and which were fake after evaluating them, adjusts itself to get better at identifying fakes and not flagging real reviews as fake. The generator gets better at generating undetectable fakes as it learns which fakes the discriminator successfully identified and which authentic reviews it incorrectly tagged.
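    To make that feedback loop concrete, here is a minimal GAN sketch, assuming PyTorch and toy one-dimensional "real" data rather than restaurant reviews: the discriminator is trained to separate real samples from generated ones, then the generator is trained to fool it, and the two improve in tandem.

    ```python
    # Minimal GAN sketch (assumed: PyTorch). The "real" data are samples from a
    # normal distribution centered at 3.0 that the generator learns to imitate.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n=64):
        return torch.randn(n, 1) * 0.5 + 3.0

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        # 1) Train the discriminator to tell real samples from fakes.
        real = real_batch()
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to fool the discriminator (the feedback loop).
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # The generated samples' mean should drift toward the real data's mean (~3.0).
    print(generator(torch.randn(1000, 8)).mean().item())
    ```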

    This phenomenon is applied in the following industries:

    • Finance. AI systems watch transaction streams in real time and analyze them in the context of a person's history to judge whether a transaction is authentic or fraudulent. All major banks and credit card companies use such software now; some develop their own and others use commercially available solutions.
    • Manufacturing. Factory AI systems can watch streams of inputs and outputs using cameras, x-rays, etc. They can flag or deflect parts and products likely to be defective. Kyocera Communications and Foxconn both use AI for visual inspection in their facilities.
    • Film and media. Just as generative tools can create fake images (e.g., a kiwi bird eating kiwi on a key), discriminative AI can identify faked images or audio files. Google's Jigsaw division focuses in part on developing technology to make deepfake detection more reliable and easier.
    • Social media and tech industry. AI systems can look at postings and patterns in postings to help spot fake accounts run by disinformation bots or other bad actors. Meta has used AI for years to help find fake accounts and to flag or block misinformation related to the COVID-19 pandemic.

    Generative AI may well become a widely known tech buzzword, like automation, and its myriad applications prove that this nascent branch of AI is here to stay. To meet modern challenges facing the tech industry, it only makes sense that this technology will expand and become deeply embedded in more and more enterprises.

    Author: John Burke

    Source: TechTarget

  • Generative AI Market Expected to Grow to $1.3 trillion in the Coming Ten Years

    Generative AI Market Expected to Grow to $1.3 trillion in the Coming Ten Years

    The release of consumer-focused artificial intelligence tools such as ChatGPT and Google’s Bard is set to fuel a decade-long boom that grows the market for generative AI to an estimated $1.3 trillion in revenue by 2032 from $40 billion last year. 

    The sector could expand at a rate of 42% over ten years — driven first by the demand for infrastructure necessary to train AI systems and then the ensuing devices that use AI models, advertising and other services, according to a new report by Bloomberg Intelligence analysts led by Mandeep Singh. 
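    As a quick back-of-the-envelope check (assuming simple annual compounding, which may not match the analysts' actual methodology), 42% yearly growth applied to roughly $40 billion for ten years lands close to the $1.3 trillion headline figure.

    ```python
    # Rough compounding check: $40B growing ~42% per year for 10 years.
    revenue_2022_billion = 40
    cagr = 0.42
    revenue_2032_billion = revenue_2022_billion * (1 + cagr) ** 10
    print(f"{revenue_2032_billion:,.0f} billion")  # ~1,334 billion, i.e. ~$1.3T
    ```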

    “The world is poised to see an explosion of growth in the generative AI sector over the next ten years that promises to fundamentally change the way the technology sector operates,” Singh said in a statement Thursday. “The technology is set to become an increasingly essential part of IT spending, ad spending and cybersecurity as it develops.”

    Demand for generative AI has boomed worldwide since ChatGPT’s release late last year, with the technology poised to disrupt everything from customer service to banking. It uses large samples of data, often harvested from the internet, to learn how to respond to prompts, allowing it to create realistic-looking images and answers to queries that appear to be from a real person. 

    Amazon.com Inc.’s cloud division, Google parent Alphabet Inc., Nvidia Corp. and Microsoft Corp., which has invested billions of dollars in OpenAI, are likely to be among the biggest winners from the AI boom, according to the report. 

    The largest driver of revenue growth from generative AI will come from demand for the infrastructure needed to train AI models, according to Bloomberg Intelligence’s forecasts, amounting to an estimated $247 billion by 2032. The AI-assisted digital ads business is expected to reach $192 billion in annual revenue by 2032, and revenue from AI servers could hit $134 billion, the report said.

    Investors, meanwhile, took a pause from their obsession with all things AI on Thursday. The software firm C3.ai fell as much as 24% in New York, extending Wednesday’s 9% decline following a disappointing sales outlook. 

    Chipmaker Nvidia, which has emerged as Wall Street’s biggest AI bet, resumed its rally, rising 3.3%. Its shares have soared by 28% since May 24 and the Silicon Valley firm briefly reached a $1 trillion valuation this week. 

    Date: June 2, 2023

    Author: Jake Rudnitsky 

    Source: Bloomberg

  • Mastering Data Governance: A Guide for Optimal Results

    Mastering Data Governance: A Guide for Optimal Results

    With digital transformation initiatives on the rise, organizations are investing more in Data Governance, a formalized practice that connects different components and increases data’s value. Some may already have established Data Governance programs for older Data Management systems (such as for controlling master data) but may lack controls for newer technologies, like training an AI to generate content, and may need guidance on best practices to follow.

    Steve Zagoudis, a leading authority on Data Governance, notes that a lack of awareness explains some of the disconnect in applying lessons learned from past Data Governance efforts to newer programs. What’s more, Data Governance has a bad reputation as a drag on innovation and technological advancement because of workflows that are perceived as meaningless.

    To turn around these trends, companies should embrace Data Governance best practices that can adapt to new situations. Furthermore, businesses must demonstrate how these activities are relevant to the organization. Using the tactics outlined below promises to achieve these goals. 

    Lead by Doing 

    With Data Governance, actions speak louder than words, especially regarding newer projects using newer technologies. Any communications from the top-down or bottom-up need to show how Data Governance activities align with business innovations. Try having:

    • Executives lead as engaged sponsors: “Executives need to support and sponsor Data Governance wherever data is,” advises Bob Seiner. Often, a data catalog (a centralized metadata inventory) can help guide executives on where to apply Data Governance. When implementing Data Governance, managers should communicate consistently and clearly about the approach, roles, and value of Data Governance. They need to emphasize that these aspects apply to new projects too. Moreover, senior leadership needs to visibly support and allocate resources – time, money, technology, etc. – toward data stewardship, formalizing accountability and responsibility for company data and its processes. 
    • Data stewards lead through information sharing: Data stewards typically have hands-on experience with company data. Consequently, these workers are a treasure trove of knowledge valuable to their co-workers, manager, and other organizations. Not only does this information exchange help others in the company learn, but sharing also activates data stewards and keeps them highly invested in Data Governance practices. With this advantage, stewards are more likely to extend their work to newer projects.
    • All employees lead by applying a company’s Data Governance best practices: Every employee takes care of Data Quality and communicates when they have questions or helpful feedback. Business leaders should provide two-way channels for stewards to encourage Data Governance adoption among their departments and allow users to express their problems or ask questions.

    Understand the “Why”

    Business requirements change quickly as companies become more data-driven. For example, the metadata requirements previously used to describe application error data and set forth by Data Governance may need a different format to train a generative AI model to suggest fixes.

    To keep Data Governance relevant, teams must create actionable use cases and connect the dots to Data Governance activities. Out of this work should come a purpose statement that defines success, along with the measurements and stories that show the project progress the company has achieved through Data Governance.

    Data Governance purpose statements help navigate the support needs of data products: ready-to-use, high-quality data services developed by team members. To justify updates to Data Governance processes, business leaders should present new data products as a proof of concept and lay out a roadmap for getting to the changes. Consider covering a few critical Data Governance activities, and how they benefit the data product, in the presentation.

    By using the Data Governance purpose statement as a guide and building out solid use cases tied to data products, teams can understand the benefits of good Data Governance and the consequences of poor Data Governance. Furthermore, this messaging solidifies when it is repeated and becomes self-evident through data product usage and product maturity.

    Cover Data Governance Capabilities

    Before starting or expanding new projects, organizations must be clear about their capacity to take on Data Governance activities. For example, if a software application needs to ship in three months, and three-quarters of the team must spend 90% of their time and money getting the technology running and fixing bugs, then resources for Data Governance work such as metadata management will be scarce.

    To get a complete picture, organizations usually assess where their Data Governance stands today, addressing both best practices and maturity.

    Once companies have compiled feedback and metrics about their Data Governance practices, they can share recommendations with stakeholders and quickly check improvements and goals as they apply Data Governance. As resources fluctuate, business leaders might consider bringing Data Governance into project daily standups or scrum meetings to track and communicate progress.

    As project managers and engineers help one another when blocked, they can note when a data product story with Data Governance activities has been completed. In addition, adding Data Governance to daily meetings can prompt team members to bring back Data Governance components that have worked in the past – data, roles, processes, communications, metrics, and tools – and reuse them to solve current issues. 

    Implement a Well-Designed Data Governance Framework

    A well-designed Data Governance framework provides components that structure an organization’s Data Governance program. Implementing such a framework means that Data Governance assures an organization of reliable data with a good balance between accessibility and security.

    Over 60% of organizations have some Data Governance that is in the initial stages, according to the recent Trends in Data Management report. Existing Data Governance programs can take many different formats, including:

    • Command-and-Control: A top-down approach that sets the Data Governance rules and assigns employees to follow them
    • Formalized: Training programs constructed as part of an organization’s data literacy initiative to encourage Data Governance practices
    • Non-Invasive: A formalization of existing roles 
    • Adaptive: A set of Data Governance principles and definitions that can be applied flexibly and made part of business operations using a combination of styles

    The best approach works with the company culture and aligns with its data strategies, combining choices and decisions that lead to high-level goals.

    Gather the metrics and feedback about Data Governance capabilities to understand what processes, guidelines, and roles exist and are working. Then, decide how many existing components can be used versus how much work needs to reframe the Data Governance approach. 

    For example, a command-and-control construction may allow enough flexibility in a start-up environment with two or three people; however, as a company adds more employees, Data Governance may need to be reformulated to a non-invasive or more adaptive approach. 

    Evaluate automation, such as a data catalog or Data Governance tools, regardless of the Data Governance framework chosen. Ideally, companies want automation that empowers workers in decision-making and adapts as necessary to the Data Governance purpose.

    Develop an Iterative Process

    To adapt, companies must develop an iterative process with their Data Governance components. This tactic means flexibility in adjusting goals to get to the Data Governance purpose.

    For example, a Data Governance program’s purpose might be to ensure Data Quality – data that is fit for consumption. Initially, Data Governance members discuss critical data elements around a data model built by a team.

    Should this task lead to unresolved disagreements after a sprint, business leaders can try shifting gears. Shelve the debate and focus on connecting terminology to shared automation tools the members use.

    Specific Data Governance processes may need updates as data moves between older and newer technologies. These cases may need new Data Governance stories for sprint planning and execution. Once an organization finds out what works over a few sprints, the team can repeat these activities and consistently communicate why and how the workflow helps.

    Conclusion

    Because business environments change rapidly, Data Governance best practices must be adaptable. Gartner has estimated that 80% of organizations will fail to scale digital business because they persist in outdated governance processes. 

    Versatile Data Governance activities require engagement from all levels of the organization, and especially sponsorship from executives. Flexibility comes from understanding the purpose behind Data Governance activities and knowing the organization’s Data Governance capabilities, so that what works can be applied to best effect.

    Data Governance needs implementation through a good framework that includes automation. In addition, any software tools supporting Data Governance need to be evaluated on how well they match the Data Governance purpose.

    Data Governance best practices must work in iterations to become agile in changing business contexts. Businesses should plan on modifying the Data Governance controls used today as new technologies emerge and business environments evolve.

    Author: Michelle Knight

    Source: Dataversity

  • Navigating the Waves of Generative AI from Tech Leaders' Perspective  

    Navigating the Waves of Generative AI from Tech Leaders' Perspective

    Risky, immature and full of promise, generative AI is forcing tech leaders to assess challenges on many fronts, and in real time. Generative AI is widely regarded as one of the great technology breakthroughs of our time. On the back of thousands of headlines generated by OpenAI’s ChatGPT, it has provoked urgent responses from many tech giants and is the theme of, and main topic of discussion at, tech conferences worldwide. But, as with any big new wave, there is a risk of once-promising projects being washed up, and there are clear and obvious concerns over governance, quality and security. To cut through the froth, CIO.com polled a range of IT leaders and experts for their views on where we are with generative AI, their hopes and their concerns.

    The state of play

    The storied UK IT chief Paul Coby, now CIO of family property developer Persimmon Homes, has seen many trends come and go but he is bullish on generative AI, even though it only made its first appearance in November 2022. “I believe generative AI is a game changer at a fundamental level,” he says. “I was at a Gartner conference in the US where they called generative AI out as the ‘third digital revolution’ after mainframe computing and the internet. The impact could really be that profound since we have a tool that can be applied to multiple use cases, from writing and designing products, to visualizations, checking code, and so forth.”

    Another experienced IT leader, David Ivell, chief product and technology officer at behavior management training company Team Teach, is already harnessing generative AI’s power. “Generative AI is a key part of our business strategy, facilitating growth with AI-enabled processes already live in production,” he says. “Since the middle of last year, we’ve been analyzing the potential impact, opportunities, and risks of the speed of innovation in this area, as well as introduced policies and implemented measures to minimize risks,” he says. “But overall, we see this as a huge opportunity. We ran workshops with every division of our business, educating them on the accelerating innovation in this area, brainstorming opportunities and risks. We’ve been shortlisting and building out potential proof-of-concept options and modelling revenue impact and already taken one concept through our innovation lab and into live production.”

    Jon Collins of technology analyst firm GigaOm and author of The Technology Garden: Cultivating Sustainable IT-Business Alignment, is both a market watcher and user. “We’re testing ChatGPT at an individual level,” he says. “The potential is highly positive and impactful, particularly as a research tool or one which gives an initial, albeit fully formed, answer. But it’s still to be seen how generative AI replaces, rather than augments, human involvement in terms of information. From a design perspective, such tools are more compelling.”

    Neil Ward-Dutton, VP, AI and Intelligent Process Automation European Practices at IDC, suggests that generative AI usage is high but business strategy may lag. “Generative AI has colossal potential to impact multiple areas of business,” he says. “An IDC survey from March 2023 saw 21% of respondents say they’re already investing in generative AI this year, and a further 55% are exploring potential use cases. In general, we see a small number of organizations using generative AI based on a strategy or plan, shaped by clear policies, and a lot of grassroots experimentation, but that’s almost always happening in a strategy vacuum.”

    What works (and what doesn’t)

    So if projects are already getting off the ground, what are feelings about where generative AI works best, and how? “The best practices are undoubtedly cross-functional collaboration, ‘try before you buy,’ and learn from what you do,” says Marc O’Brien, CIO at radiology healthcare service provider Medica Group. “In my experience, the algorithms from reputable firms do what they say on the tin but what really matters is where you position in the workflow.”

    Team Teach’s Ivell believes companies can gain a fast start by using tools being built into applications and suites. “One of the key and immediate opportunities of generative AI is it’s already being built into some tools we already use, be that Power BI, Adobe or more industry-specific apps,” he says. “To take advantage of these needs some internal discovery or analysis of these new functions, understanding how we’d use them, and, in the first instance, training our staff how to exploit the new features. People tend to use tools in the way they always have, and adoption of new features can be slow, so we need to accelerate that.”

    GigaOm’s Collins is an advocate of the always popular “start small” school of thought. “Governance has to come first, given the answers offered by generative AI solutions come with risks and caveats,” he says. “From experience, text answers can be wrong, misleading, or incomplete, and code answers can be buggy or faulty. Starting small has to be the way forward, given that success with the tooling, at least currently, is often down to the ability to create well-formed questions.”

    Ward-Dutton and IDC agree and add five other points of guidance: focusing on value and functionality, finding specific use cases, understanding limitations, considering the impact on work and jobs, and managing risks such as security, confidentiality, privacy and quality.

    Obstacles and obstructions

    Safety, bias, accuracy, and hallucination continue to be recurring issues. Jon Cosson, head of IT and CISO at wealth management firm JM Finn, recalls asking ChatGPT for his own biography. The system listed only about 70% of his CV, and simply invented a period at a well-known bank. “We need to realize where it can be enormously powerful and where it assists us, but be careful we retain human oversight,” he says. “It’s made my life easier because it allows me to write documents and make them richer, but if you rely on this beast it can bite you. We’re using it selectively in tests to see its power, but it’s heavily monitored and we won’t deploy anything if it causes any adverse decision making.”

    Medica’s O’Brien issues a caution as well. “Within healthcare the regulatory environment and the commercial frameworks are years behind the technology,” he says. “This makes it almost impossible to monetize, and, therefore, fund the implementation and usage of the algorithms. This is true across both public and independent sectors. That said, I believe once these barriers are overcome, benefits-led implementation will be swift.”

    Coby adds that the immature regulatory and legal structures around using generative AI and large language models (LLM) need to be carefully considered, as does the tendency of current programs to hallucinate. “This is why, at this stage, it’s essential that any use is checked by someone with expert knowledge. Moving from PoCs to mainstream implementation will need to be carefully controlled.”

    Ivell adds that generative AI could create unwelcome competitive dynamics. “As part of our preparation of a generative AI strategy, it’s important to understand where this technology could enable competition or startups to use it to attack our market share with new tools producing faster-to-market and lower-cost products or services,” he says. “So there’s a lot to keep aware of—not just how we may exploit it but also keeping an eye on how it’s starting to be used as a threat.”

    And in terms of intellectual property risks, IDC’s Ward-Dutton says organizations’ own IP can leak into the public domain if they aren’t careful when using public generative AI services. “Some system providers are facing lawsuits because they trained their systems on data and content without getting permission from the original creators,” he says, adding that costs can also be prohibitive because the core technology powering today’s generative AI systems is very expensive to train.

    Searching for the sweet spots

    There are varying opinions on where generative AI will make itself most felt. Collins nominates research and design: “It’s perfectly reasonable the challenges of creating a functional website from scratch should go away, as well as other areas that were already ripe for automation.” O’Brien adds it’s anything that produces content for consumption by humans, where regulation is light and pricing can fund the algorithm.

    IDC’s Ward-Dutton says the analyst’s customer panel points to three main clusters: improving customer and employee experiences; bolstering knowledge management; and accelerating software delivery. In time, he predicts, they’ll be joined by enterprise communication (including contact centres); collaboration and knowledge-sharing; content management; and design, research and creative activities.

    Though he acknowledges it is too early to say for certain, Coby believes initial successes will come from enabling humans to be much more productive by using generative AI to produce first drafts and then building on them as foundations. “This is likely to be in multiple areas and will require new skills in asking the right queries,” he says.

    Ivell concurs regarding areas of content, code generation, and customer support, but says he’s most excited by research opportunities. “AI can analyze large volumes of data in textual form to create new forms, summaries, and analyses of the data sets,” he says. “It can also provide analysis of large data sets to produce enterprise-level insight previously unavailable such as understanding patterns in behavior and creating insight we can use to build new products.”

    JM Finn’s Cosson, an enthusiastic blogger, says text and graphical content using tools such as Midjourney are obvious near-term opportunities. “It’s already powerful in blog sites and a lot of people will use it as a framework,” he says. “You don’t want to lose the human creative element but you can apply human oversight elements and deliver some outstanding pieces. Where you see downsides are in marketing types and copywriters losing their jobs, but there will be new jobs created.”

    A Trojan horse?

    Some watchers believe that generative AI can be the trailblazer for wider application of AI and ML. IDC’s Ward-Dutton is particularly enthusiastic. “In just a few months, generative AI has simultaneously captured the attention, imagination, and trepidation of tech and business leaders across the world,” he says. “We believe generative AI is a trigger technology that will usher in a new era of computing—the Era of AI Everywhere, which will completely change our relationship with data and how we extract value from both structured and unstructured data. The rapid adoption of generative AI moves AI from an emerging software segment in the stack to a lynch-pin technology at the center of a platform transition.”

    But CIOs are vocal about the importance of robots working in tandem with people. “AI works best when it works together with humans,” says Cosson. “The human brain is still worth something. Empathy and humanity are important and we need to work out how AI complements and fuses them together.”

    Date: August 29, 2023

    Author: Thomas Veitch

    Source: CIO

  • The AI Dilemma: CIO Perspectives on Navigating the Right Path Forward  

    The AI Dilemma: CIO Perspectives on Navigating the Right Path Forward

    Recent advances have highlighted AI’s incomparable potential and not yet fully fathomed risks, placing CIOs in the hot seat for figuring out how best to leverage this increasingly controversial technology in business. Here’s what they have to say about it. 

    With the AI hype cycle and subsequent backlash both in full swing, IT leaders find themselves at a tenuous inflection point regarding use of artificial intelligence in the enterprise. Following stern warnings from Elon Musk and revered AI pioneer Geoffrey Hinton, who recently left Google and is broadcasting AI’s risks and a call to pause, IT leaders are reaching out to institutions, consulting firms, and attorneys across the globe to get advice on the path forward. 

    “The recent cautionary remarks of tech CEOs such as Elon Musk about the potential dangers of artificial intelligence demonstrate that we are not doing enough to mitigate the consequences of our innovation,” says Atti Riazi, SVP and CIO of Hearst. “It is our duty as innovators to innovate responsibly and to understand the implications of technology on human life, society, and culture.”

    That sentiment is echoed by many IT leaders, who believe innovation in a free market society is inevitable and should be encouraged, especially in this era of digital transformation — but only with the right rules and regulations in place to prevent corporate catastrophe or worse. 

    “I agree a pause may be appropriate for some industries or certain high-stake use cases but in many other situations we should be pushing ahead and exploring at speed what opportunities these tools provide,” says Bob McCowan, CIO at Regeneron Pharmaceuticals. “Many board members are questioning if these technologies should be adopted or are they going to create too many risks?” McCowan adds. “I see it as both. Ignore it or shut it down and you will be missing out on significant opportunity, but giving unfettered access [to employees] without controls in place could also put your organization at risk.”

     While AI tools have been in use for years, the recent release of ChatGPT to the masses has stirred up considerably more controversy, giving many CIOs — and their boards — pause on how to proceed. Some CIOs take the risks to industry — and humanity — very seriously.

    “Every day, I worry about this more,” says Steve Randich, CIO of The Financial Industry Regulatory Authority (FINRA), a key regulatory agency that reports to the SEC. Randich notes a graph he saw recently that states that the ‘mental’ capacity of an AI program just exceeded that of a mouse and in 10 years will exceed the capacity of all of humankind. “Consider me concerned, especially if the AI programs can be influenced by bad actors and are able to hack, such as at nuclear codes,” he says.

    George Westerman, a senior lecturer at MIT Sloan School of Management, says executives at enterprises across the globe are reaching out for advice from MIT Sloan and other institutions about the ethics, risks, and potential liabilities of using generative AI. Still, Westerman believes most CIOs have already engaged with their top executives and board of directors and that generative AI itself imposes no new legal liabilities that corporations and their executives don’t abide today.

    “I would expect that just like all other officers of companies that there’s [legal] coverage there for your official duties,” Westerman says of CIOs’ personal legal exposure to AI fallout, noting the exception of using the technology inappropriately for personal gain.

    Playing catchup on generative AI

    Meanwhile, the release of ChatGPT has rattled regulatory oversight efforts. The EU had planned to enact its AI Act last month but opted to stall after ChatGPT was released given that many were concerned the policies would be outdated before going into effect. And as the European Commission and its related governing bodies work to sort out the implications of generative AI, company executives in Europe and the US are taking the warning bells seriously.

    “As AI becomes a key part of our landscape and narrow AI turns into general AI — who becomes liable? The heads of technology, the inanimate machine models? The human interveners ratifying/changing training models? The technology is moving fast, but the controls and ethics around it are not,” says Adriana Karaboutis, group chief information and digital officer at National Grid, which is based in the UK but operates in the northeast US as well. “There is a catchup game here. To this end and in the meantime managing AI in the enterprise lies with CxOs that oversee corporate and organizational risk. CIO/CTO/CDO/CISOs are no longer the owners of information risk,” given the rise of AI, the CIDO maintains. “IT relies on the CEO and all CxOs, which means corporate culture and awareness to the huge benefits of AI as well as the risks must be owned.” 

    Stockholm-based telecom Ericsson sees huge upside in generative AI and is investing in creating multiple generative AI models, including large language models, says Rickard Wieselfors, vice president and head of enterprise automation and AI at Ericsson.

    “There is a sound self-criticism within the AI industry and we are taking responsible AI very seriously,” he says. “There are multiple questions without answer in terms of intellectual property rights to text or source code used in the training. Furthermore, data leakage in querying the models, bias, factual mistakes, lack of completeness, granularity or lack of model accuracy certainly limits what you can use the models for.” “With great capability comes great responsibility and we support and participate in the current spirit of self-criticism and philosophical reflections on what AI could bring to the world,” Wieselfors says.

    Some CIOs, such as Choice Hotels’ Brian Kirkland, are monitoring the technology but do not think generative AI is fully ready for commercial use. “I do believe it is important for industry to make sure that they are aware of the risk, reward, and impact of using generative AI technologies, like ChatGPT. There are risks to data ownership and generated content that must be understood and managed to avoid negative impacts to the company,” Kirkland says. “At the same time, there is a lot of upside and opportunity to consider. The upside will be significant when there is an ability to safely and securely merge a private data set with the public data in those systems.” “There is going to be a dramatic change in how AI and machine learning enable business value through everything from generated AI content to complex and meaningful business analytics and decision making,” the Choice Hotels CIO says.

    No one is suggesting a total hold on such a powerful and life changing technology.

    In a recent Gartner poll of more than 2,500 executives, 45% indicated that attention around ChatGPT has caused them to increase their AI investments. More than 70% maintain their enterprise is currently exploring generative AI and 19% have pilots or production use under way, with projects from companies such as Unilever and CarMax already showing promise.

    At the MIT Sloan CIO conference starting May 15, Irving Wladawsky-Berger will host a panel on the potential risks and rewards of entering generative AI waters. Recently, he hosted a pre-conference discussion on the technology. “We’re all excited about generative AI today,” said the former longtime IBM researcher and current affiliate researcher at MIT Sloan, citing major advances in genomics expected due to AI. But Wladawsky-Berger noted that the due diligence required of those who adopt the technology will not be a simple task. “It just takes so much work,” he said. “[We must] figure out what works, what is safe, and what trials to do. That’s the part that takes time.”

    Another CIO on the panel, Wafaa Mamilli, chief digital and technology officer at Zoetis, said generative AI is giving pharmaceutical companies increased confidence in curing chronic human illnesses. “Because of the advances of generative AI technologies and computing power on genetic research, there are now trials in the US and outside of the US, Japan, and Europe that are targeting to cure diabetes,” she said.

    Guardrails and guidelines: Generative AI essentials

    Wall Street has more than taken notice of the industry’s swift embrace of generative AI. According to IDC, 2022 was a record-breaking year for investments in generative AI startups, with equity funding exceeding $2.6 billion. “Whether it is content creation with Jasper.ai, image creation with Midjourney, or text processing with Azure OpenAI services, there is a generative AI foundation model to boost various aspects of your business,” according to one of several recent IDC reports on generative AI.

    And CIOs already have the means of putting guardrails in place to securely move forward with generative AI pilots, Regeneron’s McCowan notes. “It’s of critical importance that you have policy and guidelines to manage access and behaviors of those that plan to use the technologies and to remind your staff to protect intellectual property, PII [Personally Identifiable Information], as well as reiterating that what gets shared may become public,” McCowan says. “Get your innovators and your lawyers together to find a risk-based model of using these tools and be clear what data you may expose, and what rights you have to the output from these solutions,” he says. “Start using the technologies with less risky use cases and learn from each iteration. Get started or you will lose out.”

    Forrester Research analyst David Truog notes that AI leaders are right to put a warning label on generative AI before enterprises begin piloting and using it in production. But he too is confident it can be done. “I don’t think stopping or pausing AI is the right path,” Truog says. “The more pragmatic and constructive path is to be judicious in selecting use cases where specialized AIs can help, embed thoughtful guardrails, and have an intentional air-gapping strategy. That would be a starting point.”

    One DevOps IT chief at a consulting firm points to several ways CIOs may mitigate risk when using generative AI, including thinking like a venture capitalist; clearly understanding the technology’s value; determining ethical and legal considerations in advance of testing; experimenting, but not rushing into investments; and considering the implications from the customer point of view.

    “Smart CIOs will form oversight committees or partner with outside consultants who can guide the organization through the implementation and help set up guidelines to promote responsible use,” says Rod Cope, CTO at Minneapolis-based Perforce.  “While investing in AI provides tremendous value for the enterprise, implementing it into your tech stack requires thoughtful consideration to protect you, your organization, and your customers.”

    While the rise of generative AI will certainly impact human jobs, some IT leaders, such as Ed Fox, CTO at managed services provider MetTel, believe the fallout may be exaggerated, although everyone will likely have to adapt or fall behind. “Some people will lose jobs during this awakening of generative AI but not to the extent some are forecasting,” Fox says. “Those of us that don’t embrace the real-time encyclopedia will be passed by.”

    Still, if there’s one theme for certain it’s that for most CIOs proceeding with caution is the best path forward. So too is getting involved.

    CIOs must strike a balance between “strict regulations that stifle innovation and guidelines to ensure that AI is developed and used responsibly,” says Tom Richer, general manager of Wipro’s Google Business Group, noting he is collaborating with his alma mater, Cornell, and its AI Initiative, to proceed prudently. “It’s vital for CIOs and IT executives to be aware of the potential risks and benefits of generative AI and to work with experts in the field to develop responsible strategies for its use,” Richer says. “This collaboration needs to involve universities, big tech, think tanks, and government research centers to develop best practices and guidelines for the development and deployment of AI technologies.”

    Author: Paula Rooney

    Source: CIO
