4 items tagged "algorithms"

  • Lessons From The U.S. Election On Big Data And Algorithms

    The failure to accurately predict the outcome of the elections has caused some backlash against big data and algorithms. This is misguided. The real issue is the failure to build unbiased models that can identify trends that do not fit neatly into our present understanding. This is one of the most urgent challenges for big data, advanced analytics, and algorithms. When speaking with retailers on this subject, I focus on two important considerations. The first is that the overlap between what we believe to be true and what is actually true is shrinking.

    This is because people, consumers, have more personal control than ever before. They source opinions from the web, social media, and groups and associations that in the past were not available to them. For retailers this is critical, because the historical view that the merchandising or marketing group holds about consumers is likely growing increasingly out of date. Yet well-meaning business people performing these tasks continue to disregard indicators and repeat the same actions. Before consumers had so many options this was not a huge problem, since change happened more slowly. Today, if you fail to catch a trend, there are tens or hundreds of other companies ready to capitalize on the opportunity. While it is difficult to accept, business people must learn a new skill: leveraging analytics to improve their instincts.

    The second is closely related to the first but with an important distinction: go where the data leads. I describe this as the KISS that connects big data to decisions.
    The KISS is about extracting Knowledge, testing Innovations, developing Strategies, and doing all of this at high Speed. The KISS is what allows the organization to safely travel down the path of discovery, going where the data leads, without falling down a rabbit hole.
    Getting back to the election prognosticators: a few did identify the trend, and they were repeatedly laughed at and disregarded. This is the foundation of the problem; organizations must foster environments where new ideas are embraced and safely explored. This is how we will grow the overlap between what we believe and what is actually true.
    Source: Gartner, November 10, 2016
  • The Essence of Data Annotation in Machine Learning

    Data annotation in machine learning is the process of labeling data in a way that machines can understand, whether for computer vision or natural language processing (NLP). Put another way, data labeling enables a machine learning model to perceive its surroundings, make judgments, and take action.

    When developing an ML model, data scientists employ many datasets, carefully adapting them to the model’s training requirements. As a result, machines can recognize labeled material in a variety of intelligible formats, such as images, text, and video.

    This is why AI and machine learning businesses look for annotated data and annotation services to feed into their algorithms, training them to detect recurrent patterns and then using that information to generate accurate estimates and forecasts.
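
    To make this concrete, here is a minimal sketch of what annotated records might look like; the field names, file names, and label values below are illustrative assumptions rather than any standard schema:

    ```python
    # Illustrative annotation records; all field names and values are hypothetical.

    # Computer vision: an image with labeled bounding boxes.
    image_annotation = {
        "image": "street_001.jpg",
        "objects": [
            {"label": "pedestrian", "bbox": [34, 120, 88, 240]},  # x, y, width, height
            {"label": "car", "bbox": [150, 200, 300, 140]},
        ],
    }

    # NLP: a sentence with entities tagged by character offsets.
    text_annotation = {
        "text": "Apple opened a new store in Berlin.",
        "entities": [
            {"label": "ORG", "start": 0, "end": 5},    # "Apple"
            {"label": "LOC", "start": 28, "end": 34},  # "Berlin"
        ],
    }
    ```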

    Why is Data Annotation Important in Machine Learning?

    Data annotation in machine learning makes all of this possible, whether it is search engines increasing the quality of their results, improved facial recognition software, or self-driving cars. Google’s ability to provide results depending on a user’s geographic area or sex, Samsung’s and Apple’s use of face-unlocking software to increase the security of their devices, and Tesla’s introduction of semi-autonomous self-driving vehicles are all living examples.

    Annotated data and annotation services are useful in machine learning for making accurate predictions and estimates in our daily lives. As noted above, they allow machines to notice recurrent patterns, make choices, and take action.

    In other words, machines are presented with data in intelligible formats and instructed what to search for, whether in the form of an image, video, text, or audio. There is no limit to how many comparable patterns a trained machine learning algorithm can identify in new datasets.

    Latest Trends

    Predictive annotation tools can automatically discover and label objects based on comparable hand annotations. In computer vision workflows, these tools can annotate successive frames once the initial few frames have been manually tagged. When selecting a data annotation company, the significant new differentiator is human expertise, which is still necessary for QA and edge cases.
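
    As a rough illustration of how that frame-to-frame propagation might work, the sketch below extrapolates manually drawn bounding boxes across later video frames under a constant-velocity assumption and flags distant frames for human review. Real predictive annotation tools rely on trained models, so treat this purely as a toy:

    ```python
    def propagate_boxes(box_frame0, box_frame1, n_frames, review_after=10):
        """Extrapolate a bounding box (x, y, w, h) across video frames, assuming
        the object moves at constant velocity. Frames far from the manual labels
        are flagged for human QA, since drift accumulates over time."""
        # Per-coordinate velocity estimated from the two manually labeled frames.
        velocity = [b1 - b0 for b0, b1 in zip(box_frame0, box_frame1)]
        annotations = []
        for t in range(2, n_frames):
            box = [round(b + v * t) for b, v in zip(box_frame0, velocity)]
            annotations.append({
                "frame": t,
                "bbox": box,
                "needs_review": t - 1 > review_after,  # edge cases go to a human
            })
        return annotations

    # Frames 0 and 1 were labeled by hand; the remaining frames are predicted.
    print(propagate_boxes((100, 50, 40, 80), (104, 52, 40, 80), n_frames=6))
    ```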

    Tailored reporting. As organizations work with large, expert data annotation teams, project progress reporting will become more granular at the individual level and more dynamic, thanks to APIs and open source technologies. Throughout a project’s lifespan, this will enable informed decision-making.

    A focus on quality assurance. When dealing with enormous datasets, teams will be formed that focus solely on edge cases and quality control, consisting of specialists with a thorough grasp of the data and its subject matter. They will be able to work without precise instructions, concentrating on detecting and correcting errors in large-scale datasets.

    A subject-matter expert (SME) workforce. As more sectors adopt AI, demand for subject-specific data annotation teams will grow in healthcare, finance, and government. From the confirmation of guidelines through to the moment of data delivery, an experienced data labeler’s focused yet thorough approach adds value to the annotation process.

    Conclusion

    Data annotation is essential to machine learning and has contributed to some of the cutting-edge technology we have today. Data annotators and annotation companies, the unseen workers of the machine learning industry, are needed today more than ever. The AI and ML industries’ overall success depends on the continued generation of the nuanced datasets required to solve some of ML’s most challenging problems.

    Annotated data in photos, videos, or text is the best “fuel” for training ML algorithms, and it is how we arrive at some of the most autonomous ML models we can be proud of.

    Author: Rayan Potter

    Source: Datafloq

  • What about the relation between AI and machine learning?

    Artificial intelligence is one of the most compelling areas of computer science research. AI technologies have gone through periods of innovation and growth, but never has AI research and development seemed as promising as it does now. This is due in part to amazing developments within machine learning, deep learning, and neural networks.

    Machine learning, a cutting-edge branch of artificial intelligence, is propelling the AI field further than ever before. While AI assistants like Siri, Cortana, and Bixby are useful, if not amusing, applications of AI, they lack the ability to learn, self-correct, and self-improve. 

    They are unable to operate outside of their code, learn independently, or apply past experiences to new problems. Machine learning is changing that. Machines are now able to grow beyond their original code, which allows them to mimic the cognitive processes of the human mind.

    Why is machine learning important for AI? As you have most likely already gathered, machine learning is the branch of AI dedicated to endowing machines with the ability to learn. While there are programs that help sort your email, provide you with personalized recommendations based on your online shopping behavior, and make playlists based on music you like, these programs lack the ability to truly think for themselves. 

    While these “weak AI” programs can analyze data well and conjure up impressive responses, they are a far cry from true artificial intelligence. Arriving at anything close to true artificial intelligence requires a machine that can learn. A machine with true artificial intelligence, also known as artificial general intelligence, would be aware of its environment and would manipulate that environment to achieve its goals. A machine with artificial general intelligence would be no different from a human, who is aware of his or her surroundings and uses that awareness to arrive at solutions to problems occurring within those surroundings.

    You may be familiar with the famous AlphaGo program that beat a professional Go player in 2016, to the chagrin of many professional Go players. While AI had beaten chess players in the past, the win came as an incredible shock to Go players and AI researchers alike. Surpassing Go players was previously thought to be impossible, given that each move in the ancient game has an almost infinite number of permutations. Decisions in Go are so intricate and complex that the game was thought to require human intuition. As it turns out, Go does not require human intuition; it only requires general-purpose learning algorithms.

    How were these general-purpose learning algorithms crafted? The AlphaGo program was created by DeepMind Technologies, an AI company acquired by Google in 2014, whose researchers and C++, Lua, and Python developers managed to create a neural network as well as a model that allows machines to mimic short-term memory. The neural network and the short-term memory model are applications of deep learning, a cutting-edge branch of machine learning.

    Deep learning is an approach to machine learning in which software emulates the human brain. Current machine learning applications allow a machine to train on a certain task by analyzing examples of that task. Deep learning allows machines to learn in a more general way. Instead of simply mimicking cognitive functioning in a predefined task, machines are endowed with what can be thought of as a sort of artificial brain, called an artificial neural network, or neural net for short.

    There are several neural net models in use today, and all use mathematics to copy the structure of the human brain. Neural nets are divided into layers and consist of thousands, sometimes millions, of interconnected processing nodes. Each connection between nodes is given a weight. If a node’s weighted input exceeds a predefined threshold, the node’s data is sent on to the next layer. These nodes act as artificial neurons, sharing clusters of data, storing experience and knowledge based on that data, and firing off new bits of information. They interact dynamically, changing thresholds and weights as they learn from experience.
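
    As a minimal sketch of that mechanism (assuming NumPy, with random weights standing in for values that training would normally learn), each layer weights its incoming connections, sums them, and passes the signal onward only after an activation step:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, weights, biases):
        """One layer of a neural net: weight each incoming connection, sum them,
        then apply a nonlinearity (ReLU here, which suppresses signals below a
        threshold of zero before the data moves to the next layer)."""
        return np.maximum(0.0, inputs @ weights + biases)

    # A tiny network: 4 inputs -> 8 hidden nodes -> 8 hidden nodes -> 2 outputs.
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
    w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

    x = rng.normal(size=(1, 4))      # one input example
    hidden = layer(layer(x, w1, b1), w2, b2)
    output = hidden @ w3 + b3        # final layer left linear
    print(output)
    ```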

    Machine learning and deep learning are exciting, and alarming, areas of research within AI. Endowing machines with the ability to learn certain tasks could be extremely useful, increase productivity, and help expedite all sorts of activities, from search algorithms to data mining. Deep learning provides even more opportunities for AI’s growth. As researchers delve deeper into deep learning, we could see machines that understand the mechanics behind learning itself, rather than simply mimicking intellectual tasks.

    Author: Greg Robinson

    Source: Information Management

  • Why communication on algorithms matters

    The models you create have real-world applications that affect how your colleagues do their jobs. That means they need to understand what you’ve created, how it works, and what its limitations are. They can’t do any of these things if it’s all one big mystery they don’t understand.

    'I’m afraid I can’t let you do that, Dave… This mission is too important for me to let you jeopardize it'

    Ever since the spectacular 2001: A Space Odyssey became the most-watched movie of 1968, humans have been both fascinated and frightened by the idea of giving AI and machine learning algorithms free rein.

    In Kubrick’s classic, a logically infallible, sentient supercomputer called HAL is tasked with guiding a mission to Jupiter. When it deems the humans on board to be detrimental to the mission, HAL starts to kill them.

    This is an extreme example, but the caution is far from misplaced. As we’ll explore in this article, time and again, we see situations where algorithms 'just doing their job' overlook needs or red flags they weren’t programmed to recognize. 

    This is bad news for people and companies affected by AI and ML gone wrong. But it’s also bad news for the organizations that shun the transformative potential of machine learning algorithms out of fear and distrust. 

    Getting to grips with the issue is vital for any CEO or department head who wants to succeed in the marketplace. As a data scientist, it’s your job to enlighten them.

    Algorithms aren't just for data scientists

    To start with, it’s important to remember, always, what you’re actually using AI and ML-backed models for. Presumably, it’s to help extract insights and establish patterns in order to answer critical questions about the health of your organization. To create better ways of predicting where things are headed and to make your business’ operations, processes, and budget allocations more efficient, no matter the industry.

    In other words, you aren’t creating clever algorithms because it’s a fun scientific challenge. You’re creating things with real-world applications that affect how your colleagues do their jobs. That means they need to understand what you’ve created, how it works, and what its limitations are. They need to be able to ask you nuanced questions and raise concerns.

    They can’t do any of these things if the whole thing is one big mystery they don’t understand. 

    When machine learning algorithms get it wrong

    Sometimes, algorithms contain inherent biases that distort predictions and lead to unfair and unhelpful decisions. Just take the case of the racist sentencing scandal in the U.S., where petty criminals were rated more likely to re-offend based on the color of their skin rather than the severity or frequency of their crimes.

    In a corporate context, the negative fallout of biases in your AI and ML models may be less dramatic, but it can still harm your business or even your customers. For example, your marketing efforts might exclude certain demographics, to your detriment and theirs. Or you might deny credit plans to customers who deserve them, simply because they share irrelevant characteristics with people who don’t. To stop these kinds of things from happening, your non-technical colleagues need to understand how the algorithm is constructed, in simple terms, well enough to challenge your rationale. Otherwise, they may end up with misleading results.
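
    One simple way to surface that kind of problem before deployment is to compare model outcomes across groups. The sketch below, with made-up field names and data, computes approval rates per demographic group; a large gap is a prompt for exactly the kind of conversation this article argues for, not proof of bias on its own:

    ```python
    from collections import defaultdict

    def approval_rates_by_group(decisions):
        """Compare approval rates across demographic groups.
        Each decision is a dict like {"group": ..., "approved": bool}."""
        totals, approved = defaultdict(int), defaultdict(int)
        for d in decisions:
            totals[d["group"]] += 1
            approved[d["group"]] += d["approved"]  # True counts as 1
        return {g: approved[g] / totals[g] for g in totals}

    # Hypothetical model outputs for two groups.
    decisions = [
        {"group": "A", "approved": True},  {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": False},
        {"group": "B", "approved": False}, {"group": "B", "approved": True},
    ]
    print(approval_rates_by_group(decisions))  # {'A': 0.67, 'B': 0.33} (approx.)
    ```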

    Applying constraints to AI and ML models

    One important way forward is for data scientists to collaborate with business teams when deciding what constraints to apply to algorithms.

    Take the 2001: A Space Odyssey example. The problem wasn’t that the ship used a powerful, deep learning AI program to solve logistical problems, predict outcomes, and counter human errors in order to get the ship to Jupiter. The problem was that the machine learning algorithm created with this single mission in mind had no constraints. It was designed to achieve the mission in the most effective way, using any means necessary; preserving human life was not wired in as a priority.

    Now imagine how a similar approach might pan out in a more mundane business context. 

    Let’s say you build an algorithm in a data science platform to help you source the most cost-effective supplies of a particular material used in one of your best-loved products. The resulting system scours the web and orders the cheapest available option that meets the description. Suspiciously cheap, in fact, which you would discover if you were to ask someone from the procurement or R&D team. But without these conversations, you don’t know to enter constraints on the lower limit or source of the product. The material turns out to be counterfeit, and an entire production run is ruined.
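
    A hedged sketch of the fix: encode the procurement team’s knowledge as explicit constraints before the algorithm picks a winner. The price floor and approved-source list here are hypothetical values you would only get from those conversations:

    ```python
    def pick_supplier(offers, min_plausible_price, approved_regions):
        """Choose the cheapest offer that passes business constraints.
        Offers below a plausible price floor are rejected as likely counterfeit;
        so are offers from sources procurement has not vetted."""
        viable = [
            o for o in offers
            if o["price"] >= min_plausible_price and o["region"] in approved_regions
        ]
        if not viable:
            raise ValueError("No offer passed constraints; escalate to procurement.")
        return min(viable, key=lambda o: o["price"])

    offers = [
        {"supplier": "X", "price": 2.10, "region": "EU"},
        {"supplier": "Y", "price": 0.40, "region": "EU"},    # suspiciously cheap
        {"supplier": "Z", "price": 2.45, "region": "APAC"},
    ]
    # The floor and region list come from procurement and R&D, not the data scientist.
    print(pick_supplier(offers, min_plausible_price=1.50, approved_regions={"EU"}))
    ```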

    How data scientists can communicate better on algorithms

    Most people who aren’t data scientists find talking about the mechanics of AI and ML very daunting. After all, it’s a complex discipline; that’s why you’re in such high demand. But just because something is tricky at a granular level doesn’t mean you can’t talk about it in simple terms.

    The key is to engage everyone who will use the model as early as possible in its development. Talk to your colleagues about how they’ll use the model and what they need from it. Discuss other priorities and concerns that affect the construction of the algorithm and the constraints you implement. Explain exactly how the results can be used to inform their decision-making, but also where they may want to intervene with human judgment. Make it clear that your door is always open and that the project will evolve over time; you can keep tweaking if it’s not perfect.

    Bear in mind that people will be far more confident about using the results of your algorithms if they can tweak the outcome and adjust parameters themselves. Try to find solutions that give individual people that kind of autonomy. That way, if their instincts tell them something’s wrong, they can explore this further instead of either disregarding the algorithm or ignoring potentially valid concerns.
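
    In practice, that autonomy can be as simple as exposing the decision threshold instead of a bare yes/no. A minimal sketch, with the default threshold and uncertainty band as assumptions:

    ```python
    def classify(score, threshold=0.5, uncertainty_band=0.1):
        """Turn a model score into a decision while wearing uncertainty openly.
        Colleagues can adjust `threshold` themselves; scores close to it are
        routed to human judgment instead of being silently decided."""
        if abs(score - threshold) <= uncertainty_band:
            return "needs human review"
        return "approve" if score > threshold else "decline"

    print(classify(0.83))                   # approve
    print(classify(0.55))                   # needs human review
    print(classify(0.55, threshold=0.7))    # decline, after a colleague raises the bar
    ```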

    Final thoughts: shaping the future of AI

    As Professor Hannah Fry, author of Hello World: How to be Human in the Age of the Machine, explained in an interview with the Economist:

    'If you design an algorithm to tell you the answer but expect the human to double-check it, question it, and know when to override it, you’re essentially creating a recipe for disaster. It’s just not something we’re going to be very good at.

    But if you design your algorithms to wear their uncertainty proudly front and center, to be open and honest with their users about how they came to their decision and all of the messiness and ambiguity it had to cut through to get there, then it’s much easier to know when we should trust our own instincts instead'.

    In other words, if data scientists encourage colleagues to trust implicitly in the HAL-like, infallible wisdom of their algorithms, not only will this lead to problems, it will also undermine trust in AI and ML in the future.

    Instead, you need to have clear, frank, honest conversations with your colleagues about the potential and limitations of the technology and the responsibilities of those that use it, and you need to do that in a language they understand.

    Author: Shelby Blitz

    Source: Dataconomy
