The History of Artificial Intelligence
In 1950, Alan Turing published a paper describing how to test a “thinking” machine. He claimed that a machine could be classified as thinking if it could carry on a conversation through a teleprinter, indistinguishable from a human. In 1952, Hodgkin and Huxley published their model of the brain as an electrical network of neurons, with individual neurons firing in all-or-nothing (on/off) pulses. These ideas were discussed at a conference at Dartmouth College in 1956, helping to inspire the concept of artificial intelligence.
AI development has not been a smooth and efficient process. The field began as an exciting, inventive concept in 1956, but artificial intelligence research funding was discontinued in the 1970s after multiple reports criticized a lack of progress. Efforts to mimic the human brain, known as “neural networks,” were tested and abandoned.
Their most advanced programmes could only solve simple problems, and unimpressed observers dismissed them as toys. AI researchers had set their goals with overconfidence (a common issue) and had made naive assumptions about the problems they would face. It should come as no surprise that their funding was cut after the outcomes they promised failed to materialize.
AI’s First Winter
Between 1974 and 1980, a period known as “The First AI Winter” occurred. Researchers working on AI faced two fundamental limitations: insufficient memory and processing speeds that would be considered terrible by today’s standards. Artificial intelligence research, like gravity research at the time, saw its government funding withdrawn, and interest dwindled. Unlike gravity research, however, AI research resumed in the 1980s, with funding from the United States and the United Kingdom to compete with Japan’s new “fifth generation” computer project and its ambition of becoming the world’s leader in computer technology.
The First AI Winter came to a close with the promising advent of “Expert Systems,” which were swiftly created and deployed by large, competitive organizations all over the world. The concept of amassing knowledge from multiple specialists and sharing that knowledge with users became the core focus of AI research. The rebirth of connectionism in the 1980s aided AI as well.
Systems of Expertise
Throughout the 1970s, expert systems were a popular approach in artificial intelligence research. An Expert System is a programme built by encoding the knowledge of human experts. A user asks an Expert System a question, and the Expert System responds with an answer that may or may not be useful. The system uses “rules” of logic to answer questions and solve problems within a clearly defined domain of knowledge.
The software has a simple design that makes it easy to create, build, and modify. Bank loan screening algorithms of the early 1980s were a good example of an Expert System, but there were also medical and sales applications that used Expert Systems. In general, these simple programmes proved highly useful and began to save enterprises significant amounts of money.
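The rule-based approach described above can be illustrated with a short sketch. The rules, field names, and thresholds below are hypothetical, invented only to mimic the bank loan screening example; a real Expert System of the era encoded hundreds of rules elicited from human specialists.

```python
# Minimal sketch of a rule-based Expert System for loan screening.
# Each rule is a (condition, conclusion) pair; the engine fires the
# first rule whose condition matches the applicant's facts.

def make_rules():
    # Hypothetical rules and thresholds, for illustration only.
    return [
        (lambda f: f["income"] < 20_000, "deny: income too low"),
        (lambda f: f["debt_ratio"] > 0.5, "deny: debt ratio too high"),
        (lambda f: f["credit_score"] >= 700, "approve"),
        (lambda f: True, "refer to human loan officer"),  # fallback rule
    ]

def evaluate(facts, rules):
    # Scan the rules in order and return the first matching conclusion.
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion

applicant = {"income": 45_000, "debt_ratio": 0.3, "credit_score": 720}
print(evaluate(applicant, make_rules()))  # approve
```

Keeping the rules as plain data, separate from the engine that applies them, is what made such systems simple to build and modify, and also why they could not “learn”: every change meant a specialist hand-editing the rule list.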
The Second Winter of AI
From 1987 until 1993, the AI field went through another significant winter. This second lull in AI development corresponded with the perception of XCON and other early Expert System computers as slow and awkward. Desktop computers were gaining traction, displacing earlier, larger, and less user-friendly computer banks.
Compared to desktop computers, Expert Systems eventually proved too expensive to maintain. They were difficult to update and could not “learn” new information, problems that desktop computers did not have. Around the same time, DARPA (the Defense Advanced Research Projects Agency) judged that AI “would not be” the next wave and moved its funds to projects more likely to yield quick results. As a result, funding for AI research was drastically reduced in the late 1980s, leading to the Second AI Winter.
The Possibility of Having a Conversation with Computers Has Arrived
Natural language processing (NLP) is a branch of artificial intelligence that enables computers and machines to understand human language. In the early 1960s, attempts to use computers as translators between Russian and English stimulated interest in natural language processing and sparked ideas about machines that could understand human speech. Attempts to put those ideas into practice were mostly fruitless, and by 1966, many had given up on the idea entirely.
Natural language processing took a quantum leap forward in the late 1980s, thanks to steady growth in CPU capacity and the introduction of new machine learning methods. These new algorithms focused primarily on statistical models rather than models such as decision trees. Statistical models for NLP advanced considerably in the 1990s.
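The statistical turn can be sketched with the simplest model of that family: a bigram model that estimates the probability of the next word from counts in a corpus. The tiny corpus below is invented purely for illustration; real systems trained on millions of words.

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimate P(next word | current word)
# from raw counts -- the core idea behind statistical NLP.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def next_word_prob(current, nxt):
    # Relative frequency of `nxt` among all words that followed `current`.
    counts = bigram_counts[current]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

print(next_word_prob("the", "cat"))  # "the" is followed by "cat" 2 of 3 times
```

Unlike a hand-written rule set, the probabilities here come entirely from data, which is why these models scaled with the growing corpora and CPU capacity of the 1990s.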
Agents with Intelligence
The focus of artificial intelligence research switched to intelligent agents in the early 1990s. Intelligent agents, also known as bots, can be used for tasks such as news retrieval, online shopping, and web browsing. With the application of Big Data programmes, they have increasingly grown into digital virtual assistants and chatbots.
Learning from Computers
NLP is developed using machine learning, a branch of artificial intelligence. Machine learning is still used as a building block for AI, even though it has grown into an industry of its own, handling tasks such as taking phone calls and offering a restricted range of relevant responses. Artificial intelligence now includes both machine learning and deep learning.
Chatbots and digital virtual assistants
Digital virtual assistants are able to interpret spoken orders and respond by accomplishing tasks.
Siri (Apple’s digital virtual assistant) gained a reputation as one of the most popular and successful digital virtual assistants for natural language processing in 2011. Online assistants like Alexa, Siri, and Google Assistant may have started off as simple providers of weather, news, and traffic updates, but breakthroughs in natural language processing (NLP) and access to enormous amounts of data have transformed them into helpful customer service tools. They can now perform many of the same functions as a human helper. They’re even able to tell jokes.
Digital virtual assistants can now manage schedules, make phone calls, take dictation, and read emails aloud. Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana are just a few examples of virtual digital assistants available today. Because they respond to voice commands, these AI assistants can be used hands-free, allowing a person to drink their coffee or change a diaper while the assistant completes the assigned task.
These virtual assistants are the AI research of the future. They’re driving automobiles, serving as robots to provide physical assistance, and conducting research to aid business decisions. Artificial intelligence continues to develop and find new applications.
Alan Turing’s Test is Passed
Digital virtual assistants and chatbots, in my humble view, have passed Alan Turing’s test and attained true artificial intelligence. Artificial intelligence today can be defined as thinking because of its ability to make judgments. If these systems communicated with a user via teletype, the user would mistakenly believe there was a human on the other end of the line. Turing’s expectations were significantly exceeded when these systems became able to speak aloud and recognise faces and other imagery.
This article is posted on Article Wine.