Predictions and Prospects for the AI Field in 2019

2022-10-23


[Yi Intelligence News, January 2] What will happen in the field of artificial intelligence in 2019? How will it differ from previous years?

Artificial intelligence is driving innovation in enterprises worldwide, whether large corporate groups or young start-ups. According to the market research report "Looking at the AI Market from Technology and Vertical Industries: Global Opportunity Analysis and Industry Forecast", the global AI market is expected to grow from $4.065 billion in 2016 to $169.411 billion by 2025, a compound annual growth rate of 55.6%. The report segments the AI market by technology, industry vertical and region, with AI technology subdivided into machine learning, natural language processing, image processing and speech recognition. In 2016, machine learning dominated the AI market in terms of revenue, and thanks to growing demand for AI solutions, this trend is expected to continue over the next few years. According to Statista, the largest share of revenue comes from artificial intelligence for enterprise applications.

The following are predictions for the field of artificial intelligence in 2019:

IBM, Google, Microsoft, Amazon and other machine learning API providers will release more inclusive data sets to address the discrimination and prejudice embedded in artificial intelligence.

Machine learning is the main form of artificial intelligence and has been successfully applied in many different fields: speech recognition in Amazon's intelligent assistant Alexa, face recognition in Facebook's automatic photo tagging, pedestrian detection in driverless cars, and even deciding which shoe advertisements to show you based on your browsing history on e-commerce sites. In machine learning, decisions are learned from existing records of human decisions and labels. To teach a computer to distinguish dogs from cats, we show it many images labeled "dog" and many labeled "cat", and let it learn the difference between the two. This seemingly harmless method carries a serious problem: prejudice. If we blindly feed human labels and decisions into the computer, it may faithfully copy our prejudices. Microsoft's notorious Tay chatbot is a warning.
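The learn-from-labels idea above can be sketched in a few lines. This is a toy nearest-centroid classifier on made-up features (weight in kg, ear length in cm); the features and numbers are hypothetical, but it illustrates the key point: the model knows only what the labeled examples tell it, so any bias in the labels is copied verbatim.

```python
# Toy supervised learning: learn one "average example" (centroid) per label
# from labeled data, then classify new inputs by the nearest centroid.

def train_nearest_centroid(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical labeled examples: [weight_kg, ear_length_cm]
labeled = [([30.0, 10.0], "dog"), ([25.0, 12.0], "dog"),
           ([4.0, 6.0], "cat"), ([5.0, 7.0], "cat")]
centroids = train_nearest_centroid(labeled)
print(predict(centroids, [28.0, 11.0]))  # → dog
```

If the training examples systematically under-represent some group, the centroids, and therefore every prediction, inherit that skew.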

Worse, and more subtly, bias arises when the data itself does not represent the broad population we want to model. For example, earlier this year a study by Joy Buolamwini and Timnit Gebru showed that in the task of classifying a person's gender, mainstream commercial computer vision products performed best on images of lighter-skinned men and worst on images of darker-skinned women. If the data sets we use to train these classifiers do not contain enough correctly labeled people of color, and do not capture broader cultural differences (wherever they come from), this is a huge problem.

Machine learning models trained on these non-inclusive data sets make obviously flawed decisions about under-represented groups. In 2019, we will see large companies with mainstream computer vision products publicly release more inclusive data sets. These data sets will become more balanced across geography, race, gender, culture and other dimensions, and their public release will also drive research into minimizing bias in artificial intelligence.

As products that make AI decisions easier to explain become mainstream, AI will see wider use in medicine and financial services.

When AI makes easily explainable decisions based on algorithms, life is much simpler. For example, an algorithm first checks whether you have a headache, then whether you have a fever, and then concludes that you have the flu. This process can be explained. As long as how the algorithm reaches its decision can be explained, its predictions have great value whether they turn out right or wrong.
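The headache-then-fever flu rule above can be written as an inherently interpretable decision procedure, where every answer carries its own reason. The symptom names and the rule itself are illustrative only, not medical advice.

```python
# A minimal sketch of an interpretable decision: each branch returns both
# a conclusion and the reason that led to it, so the decision path is
# fully traceable.

def diagnose(has_headache, has_fever):
    """Return (conclusion, explanation) for the toy flu rule."""
    if not has_headache:
        return "not flu", "no headache, so the flu rule does not apply"
    if not has_fever:
        return "not flu", "headache but no fever, so flu is ruled out"
    return "flu", "headache and fever together match the flu rule"

conclusion, reason = diagnose(has_headache=True, has_fever=True)
print(conclusion, "-", reason)  # → flu - headache and fever together match the flu rule
```

With a deep network, by contrast, there is no such branch-by-branch story to read off, which is exactly the gap the next paragraphs describe.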

In medicine, where we may use machines to make life-and-death decisions, it is obviously important to trace back and understand why a machine gives a specific recommendation. This is also crucial in areas such as finance: if an AI algorithm refuses someone a loan, it is necessary to understand why, especially to ensure that no unjustified discrimination is at work. As AI becomes more and more successful, it relies more heavily on a technique called "deep learning", which uses neural networks with many layers (hence the "deep" in the name). In these systems there is no obvious way to explain what happened and why the machine made its decision. Such a system is like an extremely accurate black box: it can take in a series of symptoms, measurements, images, and data on a patient's condition and medical history, and output highly accurate diagnoses.

For example, Google AI can predict whether you are at risk of heart disease by examining your eyes. In 2019, as start-ups and large companies seek to drive the adoption of artificial intelligence in the financial and medical industries, business support systems tailored to these industries will emerge to help us probe deep neural networks and better explain their predictions. Enterprises will try to fully automate the interpretation of these predictions, but successful practice will let humans investigate and explore the "black box" and better understand its decision-making, so that the humans behind the machine can offer their own explanations.
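One common way such tools probe a black box is perturbation: nudge each input feature and measure how much the output moves. The sketch below uses a hypothetical risk-scoring function as a stand-in for an opaque trained model; the feature names and coefficients are made up for illustration.

```python
# Perturbation-based sensitivity analysis: treat the model as a black box
# and rank features by how strongly a small nudge changes the output.

def black_box(features):
    # Hypothetical risk score; in practice this would be an opaque
    # trained deep network, not a visible formula.
    age, blood_pressure, cholesterol = features
    return 0.02 * age + 0.01 * blood_pressure + 0.001 * cholesterol

def sensitivity(model, features, delta=1.0):
    """Per-feature output change when that feature is nudged by delta."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        nudged = list(features)
        nudged[i] += delta
        scores.append(model(nudged) - base)
    return scores

patient = [60.0, 130.0, 200.0]
print(sensitivity(black_box, patient))  # largest value = most influential feature
```

Real interpretability products are far more sophisticated (sampling many perturbations, fitting local surrogate models), but the underlying idea of probing the black box from outside is the same.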

Algorithm vs. algorithm: beyond deepfakes, AI systems in other fields will come under AI-based attack.

With continued progress in technology for generating realistic fake images and videos, and the emergence of new ways to deceive machine learning algorithms (such as deepfakes), autonomous vehicles and other mission-critical systems will face new security problems. So far, public attention has mainly focused on images, video and audio, where "fake media" and deepfakes abound, but in 2019 we will see demonstrations of another kind of attack: convincing but false structured and unstructured text data that leads machines to make errors in automated decisions such as credit scoring and extracting data from documents.
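The structured-data attack described above can be sketched against a toy linear credit-scoring model: an attacker who knows (or has estimated) the model's weights nudges each input field slightly in the direction that raises the score until the decision flips. The weights, fields and threshold here are all hypothetical.

```python
# Sketch of an adversarial attack on structured data: small, hard-to-spot
# changes to input fields flip an automated loan decision.

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
threshold = 0.0  # score above threshold => loan approved

def score(applicant):
    """Linear credit score over the applicant's fields."""
    return sum(weights[k] * applicant[k] for k in weights)

def adversarial_nudge(applicant, step=0.1):
    """Move each field slightly in the direction that raises the score."""
    return {k: v + step * (1 if weights[k] > 0 else -1)
            for k, v in applicant.items()}

applicant = {"income": 1.0, "debt": 1.0, "years_employed": 0.2}
print(score(applicant))  # below threshold: rejected

tampered = applicant
for _ in range(5):       # a few small, individually innocuous nudges
    tampered = adversarial_nudge(tampered)
print(score(tampered))   # now above threshold: approved
```

Defenses against this class of attack (input validation, adversarial training, anomaly detection on submitted records) are an active research area.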

Transfer learning and simulation will become mainstream, helping enterprises overcome the cold-start problem and avoid the high cost of accumulating training data.

The success of most AI projects depends to a large extent on having high-quality, labeled data. Most projects die on this problem: they usually have no ready-made data about the problem at hand, or it is impractical to manually label all the existing data.

For example, even something as simple as predicting whether customers will buy a product runs into the cold-start problem when there are no customers at first. If your business has not yet grown, you will never get the "big data" that the most powerful techniques may require. Worse, in situations that require expertise (labeling tumors, for example), obtaining thousands of labels is extremely expensive.

An active area of AI research is how to deal with this challenge: with only a small amount of data, how can we use powerful deep learning techniques? In 2019, two methods will see wider enterprise adoption. The first is transfer learning: a model trained in a field with a large amount of data is retrained for another field with much less data. For example, Landing AI can detect defects in objects on a production line using only a few examples of defective products. Anyone can now start with a model that has learned a great deal about images from a large data set such as ImageNet, and train a specialized object classifier (for example, distinguishing damaged cars or houses to automate insurance processing). These fields do not even have to share the same data type: researchers have used models learned from image databases to train classifiers on sensor data.
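The transfer-learning recipe can be sketched as: keep a frozen "pretrained" feature extractor (standing in for, say, an ImageNet network) and train only a tiny new classifier head on a handful of labeled examples. Everything below is a toy under that assumption; the extractor, data and labels are hypothetical.

```python
# Transfer-learning sketch: frozen feature extractor + small trainable head.

def pretrained_features(raw):
    # Stand-in for a frozen pretrained network: fixed, never retrained.
    # Maps a raw input to two summary features (mean and range).
    return [sum(raw) / len(raw), max(raw) - min(raw)]

def train_head(examples, epochs=20, lr=0.1):
    """Train a small linear head (perceptron) on the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for raw, label in examples:  # label: +1 defective, -1 good
            f = pretrained_features(raw)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1
            if pred != label:  # perceptron update on mistakes only
                w = [wi + lr * label * fi for wi, fi in zip(w, f)]
                b += lr * label

    def head(raw):
        f = pretrained_features(raw)
        return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1
    return head

# Only four labeled examples are needed, because the features are reused.
few_examples = [([0.9, 0.1, 0.9], 1), ([0.5, 0.5, 0.5], -1),
                ([0.8, 0.2, 1.0], 1), ([0.4, 0.6, 0.5], -1)]
classify = train_head(few_examples)
print(classify([0.9, 0.0, 0.9]))  # → 1 (defective-like item)
```

The design point is that the expensive part (the extractor) was paid for once on a big data set, so the new task needs only enough labels to fit the small head.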

The second method is synthetic data generation and simulation. Generative adversarial networks (GANs) let us create very realistic data; NVIDIA has famously used GANs to generate virtual but strikingly convincing celebrity faces. Autonomous vehicle companies have also built virtual simulation environments in which they can train their driving algorithms over far more miles than would be possible in real life. For example, Waymo's driverless cars have traveled 5 billion miles in simulation but only about eight million miles on real-world roads. In 2019, enterprises will use simulation, virtual reality and synthetic data to make progress in machine learning that was previously impossible due to data limitations.

Growing privacy requirements will push more artificial intelligence onto edge devices, and the large Internet giants will invest in edge AI to gain competitive advantage.

As consumers grow warier of handing all their data to large Internet companies, enterprises that can offer services without uploading data to the cloud will enjoy a competitive advantage. The industry has generally assumed that products and services must use the cloud for expensive machine learning operations such as face recognition and speech recognition, but advances in hardware and rising awareness of privacy protection will push more machine learning to happen directly on smaller edge devices, reducing the need to send potentially sensitive data to central servers. This trend is still in its early stages: companies such as Apple run machine learning models on the mobile device rather than in the cloud (for example, using Core ML and a dedicated Neural Engine chip), and Google has announced Edge TPU products. In 2019 this trend will accelerate, with mobile, smart home and IoT ecosystems driving machine learning on edge devices. (Compiled by Le Bang)
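One standard technique behind on-device AI is weight quantization: shrinking a model's float weights to 8-bit integers so it fits and runs locally, keeping the raw input on the device. The sketch below uses a hypothetical tiny linear model; the weights and input are made up, and real frameworks quantize far more carefully.

```python
# Edge-inference sketch: quantize float weights to the int8 range so a
# compact model can run locally instead of sending data to the cloud.

weights = [0.31, -0.72, 0.05, 0.44]  # hypothetical "cloud-trained" model

def quantize(ws, scale=127.0):
    """Map float weights into the signed 8-bit range; return (ints, step)."""
    m = max(abs(w) for w in ws)
    return [round(w / m * scale) for w in ws], m / scale

def edge_predict(x, q_weights, q_step):
    """Run inference on-device; only the final score would leave the device."""
    return sum(q * q_step * xi for q, xi in zip(q_weights, x))

q, step = quantize(weights)
print(q)                                        # int8-range weights
print(edge_predict([1.0, 0.5, 2.0, 1.0], q, step))  # close to the float score
```

The quantized score differs from the full-float score only by a small rounding error, which is the trade-off that makes on-device inference cheap enough for phones and IoT hardware.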
