Artificial intelligence has become a common topic of discussion in the technology industry. The technology has been applied to Gmail, autonomous vehicles, and photo processing, and Mark Zuckerberg even plans to build an artificial intelligence butler. The problem is that the phrase sounds a little too sci-fi: it conjures up a supercomputer piloting a spaceship, not a particularly smart spam filter. As a result, people have begun to worry about when artificial intelligence will rebel and rule over humans.
To some extent, technology companies encourage people to ignore the gap between real artificial intelligence and the artificial intelligence of science fiction, but once you look at what the computers are actually doing, the difference is easy to see. This article covers the most common consumer applications of artificial intelligence, the limitations of the current technology, and why we don't need to worry about a robot uprising.
What are neural networks, machine learning, and deep learning?
These three terms come up constantly now. They can be thought of as three levels: the neural network sits at the bottom, as the computing structure on which artificial intelligence is built; machine learning is the next layer up, a set of programs that run on the neural network and train the computer to find particular answers in data; deep learning sits at the top, a specific kind of machine learning that has only taken off in the last ten years, thanks largely to cheap processing power and the flood of data from the Internet.
The concept of neural networks dates back to the beginnings of artificial intelligence in the 1950s. Simply put, it is a way of building a computer so that it resembles a cartoon version of the brain, with neuron-like nodes connected into a network. On their own these nodes are dumb and can only answer the most basic questions; combined, they can tackle complex problems. More importantly, with the right algorithms, they can learn.
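To make the idea concrete, here is a minimal sketch of such a network in Python (using only NumPy; the layer sizes and numbers are invented for illustration). Each node does nothing more than weight its inputs and squash the sum, yet even a few of them stacked together give a machine whose answer depends on a pattern of connections rather than on hand-written rules.

```python
# A minimal, illustrative neural network using nothing beyond NumPy.
# Each "node" only weights its inputs and squashes the sum; whatever
# intelligence emerges lives entirely in the connection weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # weights from 2 inputs to 3 hidden nodes
W2 = rng.normal(size=(3, 1))   # weights from 3 hidden nodes to 1 output

def network(inputs):
    hidden = sigmoid(inputs @ W1)   # each hidden node answers a tiny question
    return sigmoid(hidden @ W2)     # the output node combines those answers

print(network(np.array([0.5, -1.0])))  # an untrained guess between 0 and 1
```

Untrained, the output is just a guess; learning, described below, is the process of adjusting those weight matrices until the guesses become useful.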
Ernest Davis, a professor of computer science at New York University, explains it this way: if you want a computer to learn how to cross the road, traditional programming requires you to give it a very specific set of rules telling it how to look both ways, wait for cars, use the crosswalk, and so on, and then let it try. With machine learning, you instead show it 10,000 videos of people crossing the road safely (and 10,000 videos of people getting hit by cars).
Even then, getting the computer to absorb all the information in those videos is a major difficulty. Over the past few decades, researchers have tried all sorts of methods to teach computers, including reinforcement learning and genetic algorithms. The former rewards the computer whenever it reaches its goal, so that it gradually refines its solution; the latter pits different approaches to a problem against one another in an evolution-like process.
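As a rough illustration of the first idea, the sketch below (plain Python, with an invented "crossing the road" reward) shows the core loop of reinforcement learning: try actions, and slowly favor whichever ones earn a reward.

```python
# A toy reward loop in the spirit of reinforcement learning (illustrative only).
# The "agent" keeps a score for each action and nudges the score toward the
# reward that action actually earned.
import random

actions = ["wait", "walk", "run"]
scores = {a: 0.0 for a in actions}

def reward(action):
    # Hypothetical environment: waiting for traffic is the "safe" choice.
    return 1.0 if action == "wait" else 0.0

for _ in range(1000):
    # Mostly pick the best-scoring action, but sometimes explore a random one.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(scores, key=scores.get)
    scores[action] += 0.1 * (reward(action) - scores[action])

print(scores)  # "wait" ends up with the highest score
```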
In today's computing world, one teaching method has turned out to be especially useful: deep learning. It is a type of machine learning that uses multiple layers of a neural network to analyze data at different levels of abstraction. When a deep learning system is shown a painting, each layer of the network examines it at a different scale. The bottom layer might focus on a 5x5 grid of pixels and decide whether something is there; if so, the layer above it starts to examine how that grid fits into a larger pattern. The process builds up step by step, allowing the software to make sense of even very complex data.
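Here is a hedged sketch of that stacking, written with PyTorch (the layer sizes and the 28x28 image are made up for illustration): the bottom layer scans small pixel patches, and each layer above it looks at combinations of what the layer below found.

```python
# An illustrative stack of layers in the deep-learning style (PyTorch).
# Early layers look at small pixel neighborhoods; later layers look at
# combinations of the patterns the earlier layers detected.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),   # bottom layer: scans 5x5 pixel patches
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5),  # next layer: combines patches into larger motifs
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 20 * 20, 2),       # top layer: turns the motifs into a yes/no answer
)

image = torch.randn(1, 1, 28, 28)     # a stand-in 28x28 grayscale image
print(model(image).shape)             # torch.Size([1, 2])
```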
Next, suppose we want to use deep learning to teach a computer what a cat looks like. Different layers of the neural network would come to recognize different elements of a cat: paws, claws, and whiskers. The network is then shown lots of pictures of cats and other animals and told which ones are cats and which are not. Over time it learns which of those elements matter, strengthening some connections and discounting others. It might find, for example, that paws are strongly associated with cats but also appear on other animals, so it learns to look for whiskers at the same time.
This is a long, repetitive process in which the system slowly improves based on feedback. Humans can correct the computer along the way, and if the network has enough labeled data it can also test itself, working out how to combine its own parts to produce the most accurate results. If recognizing a cat is this hard, you can imagine the complexity of a system meant to recognize everything in the world. That is why Microsoft launching an app that identifies dog breeds counts as an achievement: to us the difference between a Doberman and a Schnauzer may be obvious, but a computer has to pin down a huge number of distinctions before it can tell the two apart.
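Continuing in the same spirit, here is a minimal sketch of that feedback loop in PyTorch. The "photos" here are random noise and the cat/not-cat labels are invented, but the mechanics are the usual ones: the network guesses, is told the right answer, and its connection weights are nudged so the same mistake becomes slightly less likely.

```python
# Illustrative training loop: show labeled examples, compare the guess with the
# label, and nudge the weights to reduce the error. Real systems do this over
# millions of real photos; here the "photos" are random noise.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(100, 1, 28, 28)   # stand-in pictures
labels = torch.randint(0, 2, (100,))   # 1 = "cat", 0 = "not a cat" (made up)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong were the guesses?
    loss.backward()                         # which weights contributed to the error?
    optimizer.step()                        # nudge those weights
    print(epoch, loss.item())               # the error typically shrinks a little each pass
```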
Is this the technology used by Google, Facebook and other companies?
Broadly speaking, yes.
Deep learning techniques are now applied to all kinds of everyday tasks. Many large companies have their own AI departments, and Facebook and Google have open-sourced some of their research software. Google has even launched a free three-month online course introducing deep learning. Academic researchers may be able to work in relative obscurity, but these companies are showing off inventive applications of the technology almost weekly, from Microsoft's emotion-recognition web app to Google's surreal DeepDream images. This is why deep learning has been everywhere in the news lately: the big consumer technology companies are trumpeting the technology and sharing their strangest experiments.
However, while deep learning is excellent at speech and image recognition, it also has considerable limitations. Not only does the technique require a lot of data and fine-tuning, but the intelligence it produces is narrow and brittle. As cognitive psychologist Gary Marcus has put it, the popular technique "lacks ways of representing causal relationships (such as between diseases and their symptoms)" and struggles to acquire abstract ideas; it has no obvious way of performing logical inference, and it is still a long way from integrating abstract knowledge, such as what an object is, what it is for, and how it is typically used. In other words, deep learning has no common sense.
For example, in one Google research project, researchers trained a neural network on pictures of dumbbells and then asked it to generate an image of one on its own. The results were not bad: a horizontal bar connecting two gray discs. But the outline of an arm kept appearing attached to the bar, because the training images usually showed fitness enthusiasts holding the dumbbells. Deep learning can extract the basic visual attributes of a dumbbell from thousands of pictures, but it never makes the cognitive leap of recognizing that a dumbbell doesn't come with an arm attached. And the problem goes beyond common sense: because of the particular way they examine data, deep learning networks can also be fooled by random-looking pixel patterns.
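Those pixel-pattern attacks exploit exactly that narrowness. Below is a hedged sketch of the general gradient-based trick (the classifier is an untrained stand-in, purely hypothetical): nudge every pixel a tiny amount in whichever direction most increases the network's error, and the prediction can flip even though a human sees no change.

```python
# A sketch of the gradient-based "fooling" trick. The classifier here is an
# untrained stand-in (hypothetical); real attacks target real trained networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # stand-in classifier

def fooling_image(image, true_label, epsilon=0.05):
    """Nudge each pixel slightly in the direction that most increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # To a person the perturbation looks like faint noise, but it is aimed
    # precisely at the statistics the network actually checks.
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

original = torch.rand(1, 1, 28, 28)
altered = fooling_image(original, torch.tensor([0]))
print((altered - original).abs().max())  # at most epsilon per pixel
```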
These limitations can be subtly hidden, though. Take digital assistants like Siri: they can often understand a user's commands and even crack a clever joke. But as computer scientist Hector Levesque points out, such tricks only highlight the enormous gap between this kind of artificial intelligence and real intelligence. Speaking of the Turing test, he notes that the machines that do well at the challenge rely on cheap tricks to make people believe they are talking to a human: jokes, quotations, feigned emotional outbursts, misdirection, and every sort of verbal evasion designed to confuse and sidetrack the questioner. Indeed, the computer that "passed" the Turing test last year claimed to be a 13-year-old Ukrainian boy, which served as an excuse for its occasional ignorance.
Levesque argues that a better way to test artificial intelligence is to ask the computer surreal but logically sensible questions that require broad causal knowledge to answer, such as "Could a crocodile run a steeplechase?" or "Could a baseball player wear small wings on his cap?" Just imagine how much knowledge a computer would need before it could even attempt these questions.
If this is not artificial intelligence, what is it?
This is one of the difficulties with the term artificial intelligence: it is notoriously hard to define. A common observation in the industry is that as soon as a machine can complete a task that previously only humans could do—playing chess, say, or recognizing a face—it stops being considered a sign of intelligence. As computer scientist Larry Tesler put it, intelligence is whatever machines haven't done yet. And even when computers can perform a given task, that doesn't make them a substitute for human intelligence. "We say neural networks are like the human brain, but that's not really true," says Yann LeCun, director of Facebook's artificial intelligence research group. "It's like saying airplanes are like birds. They don't flap their wings, and they don't have feathers or muscles." Even if we do create real artificial intelligence, he adds, it won't be much like human or animal intelligence. It is hard for us to even imagine, for instance, an intelligence with no instinct for self-preservation.
Many insiders in the AI field don't believe we are anywhere near creating truly sentient artificial intelligence. "The current approaches don't give (artificial intelligence) the flexibility to handle multiple tasks, or tasks outside what they were programmed for," said Professor Andrei Barbu of MIT's Center for Brains, Minds and Machines. He also noted that most current work amounts to building systems fine-tuned to solve one specific problem. Researchers have experimented with unsupervised machine learning—letting a system observe data that has not been classified or labeled—but that work is still at a very early stage. Google built a neural network that taught itself what a cat looks like simply by watching thumbnails from 10 million videos, but its creators have not reported much in the way of other abilities. As LeCun said at an Orange Institute event two years ago: "We don't know how to do unsupervised learning, and that's the biggest obstacle."
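For contrast with the cat-recognition training described earlier, here is a toy sketch of the unsupervised setting (NumPy k-means on invented two-dimensional points, standing in for the far harder problem of raw video): the program is never told what the groups are; it only discovers that the data falls into groups.

```python
# Toy unsupervised learning: group unlabeled points without ever being told
# what the groups mean. Real systems attempt something analogous with video frames.
import numpy as np

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 1, (50, 2)),
                    rng.normal(5, 1, (50, 2))])   # two hidden groups
centers = points[rng.choice(len(points), 2, replace=False)]

for _ in range(10):
    # Assign each point to its nearest center, then move each center to the
    # average of its points. No labels are used anywhere.
    nearest = np.argmin(((points[:, None] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([points[nearest == k].mean(axis=0) for k in range(2)])

print(centers)  # roughly (0, 0) and (5, 5): the groups were found, not taught
```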
As a field of research, artificial intelligence has always been prone to hype. Whenever a new method appears and progress speeds up, commentators (and often computer scientists themselves) boldly assume that the pace will continue and that robot butlers are just around the corner. The New York Times ran such a report as early as 1958, describing a very early form of AI that could tell left from right as the "embryo" of an electronic computer that would one day be able to "walk, talk, see, write, reproduce itself and be conscious of its existence." When promises like these go unfulfilled, the field slides into a so-called AI winter, a period of pessimism and shrinking funding. There have been more than a dozen small AI winters, along with two major ones in the late 1970s and early 1990s. Every research field goes through something similar, but it is worth noting how few disciplines disappoint their followers so reliably, and so often, that the phenomenon has earned its own proper name.
Is artificial intelligence just gimmicks and parlor tricks, then?
That's a little unfair. How you view artificial intelligence depends on what you expect from it. Our machines are getting smarter, but not in a way that is easy to categorize. Take Tesla's Autopilot software. The company's CEO, Elon Musk, has described it as a network that pools data so that all of its cars learn at the same time. The end point of that project won't be general artificial intelligence, but the network as a whole does possess a considerable degree of intelligence—what LeCun calls "invisible intelligence."
Imagine a future in which you have a self-driving car that never gets it wrong, paired with an advanced digital assistant. It might be built from exactly the kind of tricks Professor Levesque disdains, yet anyone riding with it would treat it as a person. You would chat with it on the morning commute, discuss the news, have it rearrange your schedule or change your destination when needed—all inside a self-driving car that not only knows the rules of the road but can negotiate its way around other vehicles. At that point, would we really still care whether the artificial intelligence is "real"?