What’s Artificial Intelligence Done in the Last 20 Years?
2000: Honda introduced ASIMO, a humanoid robot designed to be artificially intelligent, which could walk as fast as a human and deliver trays to customers in a restaurant setting.
2006: Geoffrey Hinton introduced an efficient way to train multi-layer neural networks, work widely regarded as the first glimpse of modern deep learning.
2007: Fei-Fei Li and her colleagues at Princeton University began building ImageNet, a large database of annotated images designed to support research on visual object recognition software.
2009: Google began developing driverless vehicles. Nevada later became the first U.S. state to license one of Google's self-driving cars.
2011: Apple introduced Siri and began shipping it on its devices. Siri has been part of the iOS operating system since iOS 5 and has been included on Apple devices ever since. Although Siri is very successful at understanding and responding to speech, research into recognizing emotions and thoughts from tone of voice is still ongoing.
2013: Facebook assembled a dedicated deep-learning team, including former Google researcher Marc'Aurelio Ranzato and Yaniv Taigman, founder of Face.com, the facial-recognition company Facebook had acquired. Starting from studies of how networks of brain cells process information, the group set out to use artificial intelligence to understand and make sense of the posts and likes of Facebook's 700 million users. This work accelerated the company's deep-learning efforts. In the simplest terms, Facebook would be able to tell who was in a photo even if the user had not tagged it.
2014: Google bought the artificial intelligence company DeepMind, one of its most important recent moves to advance in artificial intelligence and robotics. The goal of the $400 million deal was to design general-purpose learning algorithms by combining machine learning and neural networks. The acquisition of DeepMind was also seen as Google's move to bring high-level artificial-intelligence talent in-house.
2014: Facebook's new facial-recognition technology reached human-level accuracy. Shown two separate photos of a stranger, people match them correctly 97.53 percent of the time; the software newly developed by Facebook researchers scored 97.25 percent on the same test. The software was trained on the largest face database created to date, consisting of 4 million face images of 4,000 people. The work cut the error rate of Facebook's previous technology by 25 percent, bringing it very close to human performance.
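Face-verification systems of this kind typically map each photo to a numeric embedding and decide "same person" by comparing embeddings. A minimal sketch of that comparison step, where the toy embeddings and the threshold are illustrative assumptions, not Facebook's actual model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    """Declare a match when the embeddings are close enough.
    The threshold here is a hypothetical value for illustration."""
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings standing in for the output of a face-recognition network.
alice_photo_1 = [0.9, 0.1, 0.3]
alice_photo_2 = [0.85, 0.15, 0.28]
bob_photo = [0.1, 0.9, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # True: similar vectors match
print(same_person(alice_photo_1, bob_photo))      # False: dissimilar vectors do not
```

The hard part in practice is learning an embedding where two photos of the same person land close together; the comparison itself stays this simple.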
2015: Computer-engineering researchers at the University of Maryland developed robots that learn to cook by watching YouTube videos. In the study, the robots recognized objects with 79 percent accuracy, recognized how to grasp them with 91 percent accuracy, and predicted actions with 83 percent accuracy.
2015: Skydio, a startup founded by members of the team behind Google's Project Wing, announced that it had developed artificial intelligence for drones.
2015: Google developed artificial intelligence that learns and masters video games entirely on its own.
2015: "Machine teaching" tools began letting people train computers within their own areas of expertise and knowledge. For example, a chef could teach the computer the tricks of a dish and how to prepare it, and a doctor could transfer medical knowledge to the computer.
2015: Researchers at ETH Zurich (the Swiss Federal Institute of Technology) and the University of Cambridge carried out a robot project able to learn and improve itself without human intervention. The team produced a prototype "mother" robot that built 10 "baby" robots out of plastic, each with a motor inside, then tracked the performance of the babies it had produced and used those results to make each newly built baby more "skillful."
2016: Apple acquired Emotient, an artificial-intelligence startup that recognizes emotion in real time. Emotient detects general attributes such as a customer's gender and age in real time without violating privacy rules, enabling real-time targeting on digital screens. For example, if the person looking at the screen is a young male customer, he might be shown a razor-blade advertisement.
2017: NVIDIA developed PilotNet, a neural-network-based system that learns to drive a car by observing human drivers. Not content with that, NVIDIA also developed a method that reveals what the network prioritizes when making driving decisions, so the system can share those priorities with its operators.
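Learning to drive by observing people is an instance of behavioral cloning: a model is trained to map what the car sees to the steering command a human chose. A minimal sketch of that idea, using a linear model and synthetic data as stand-ins (all features, data, and the 0.5 steering rule below are illustrative assumptions, not PilotNet's actual architecture):

```python
import random

def train_steering_model(samples, epochs=200, lr=0.01):
    """Fit a linear model (features -> steering angle) by minimizing
    squared error against the human driver's recorded angle.
    A stand-in for PilotNet's convolutional network."""
    n = len(samples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for features, human_angle in samples:
            pred = sum(w * x for w, x in zip(weights, features))
            err = pred - human_angle
            # Gradient step on the squared-error loss.
            weights = [w - lr * err * x for w, x in zip(weights, features)]
    return weights

# Synthetic "camera frames": one feature encodes road curvature, plus a bias
# term; the human's steering angle follows the curvature.
random.seed(1)
data = []
for _ in range(100):
    curvature = random.uniform(-1, 1)
    data.append(([curvature, 1.0], 0.5 * curvature))

w = train_steering_model(data)
print(abs(w[0] - 0.5) < 0.05)  # the model recovers the underlying steering rule
```

The real system learns from raw camera images rather than hand-made features, but the training signal, imitating the human's steering, is the same.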
2017: DeepMind researchers added memory to artificial intelligence with a new algorithm they developed.
2017: Google released a machine-learning API that recognizes video content and makes it searchable.
2017: Researchers at the University of Nottingham developed an artificial-intelligence system that scans routine medical data to predict which patients will have a stroke or heart attack within 10 years. The system proved far more successful than traditional methods, correctly identifying 355 more cases than doctors making predictions with conventional approaches.
2018: SingularityNET, a blockchain-based artificial-intelligence platform, launched as a decentralized marketplace for AI. Ben Goertzel and his team plan to create a blockchain-based infrastructure that can efficiently run a wide range of AI algorithms, from image recognition to natural language processing. The system can also track which algorithms are used most, so that developers can act accordingly.