Artificial Intelligence:
-Artificial Intelligence (AI) can be defined in terms of fidelity to human performance, or abstractly through rationality; in general, AI is the study of how to create computer systems that can perform tasks that would normally require human intelligence to accomplish
-In other words, AI can be defined as the study of how to make computers do things at which, at the moment, people are better
-This involves a combination of techniques, including knowledge representation, automated reasoning, machine learning, and natural language processing
Approaches to AI:
-An approach to AI is a specific method or philosophy that informs how AI is conceived, built, and evaluated
-Different ways or perspectives can be taken when designing and developing AI systems:
1. Acting Humanly:
-This approach focuses on creating AI systems that can mimic human behavior, such as speech recognition, language translation, and decision-making. The Turing Test, in which a human evaluator tries to distinguish between responses given by a computer and those given by a human, is often used to evaluate the success of this approach
-Examples include speech recognition systems, such as Siri or Alexa, which can understand and respond to human voice commands
2. Thinking Humanly:
-This approach focuses on creating AI systems that can model and replicate human thought processes. This approach is based on the idea that human thinking is fundamentally different from other forms of intelligence and that, by understanding how humans think, we can create more advanced and effective AI systems.
-Examples include cognitive architectures such as Soar, ACT-R, MicroPsi
3. Thinking Rationally:
-This approach focuses on creating AI systems that can reason logically and make decisions based on formal rules and representations of knowledge. This involves using techniques such as logic, inference, and knowledge representation to model reasoning.
-An example is an expert system that provides advice to medical professionals based on a database of medical knowledge and rules
4. Acting Rationally:
-This approach focuses on creating AI systems that can make rational decisions based on the available information and their goals. This involves using techniques such as decision theory, game theory, and optimization to maximize expected utility.
-Examples include autonomous robots that can navigate their environments and perform tasks without human intervention
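The decision-theoretic view of acting rationally can be sketched in a few lines of Python. The actions, outcome probabilities, and utilities below are invented purely for illustration; they are not from any specific system:

```python
# Sketch of "acting rationally" as expected-utility maximization.
# Each action maps to a list of (probability, utility) outcome pairs;
# the rational choice is the action with the highest expected utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical route choice for a delivery robot:
actions = {
    "highway":   [(0.9, 100), (0.1, -50)],   # fast, small risk of delay
    "back_road": [(1.0, 60)],                # slower but certain
}
best = rational_choice(actions)
```

Here the highway wins because its expected utility (0.9·100 + 0.1·(−50) = 85) exceeds the certain 60 of the back road, even though it carries some risk.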
Overall, these approaches to AI are complementary and can be combined to create more robust and intelligent systems. The choice of approach depends on the specific task and domain in which the AI system will be applied.
Turing Test:
-The Turing test is a test proposed by Alan Turing in 1950 to determine whether a machine can exhibit intelligent behavior that is indistinguishable from that of a human
-In this test, a human evaluator engages in a natural language conversation with both a human and a machine, without knowing which is which; if the evaluator cannot reliably distinguish between the human and the machine in their conversation, then the machine is said to have passed the test
-The computer would need the following capabilities:
1. Natural language processing to communicate successfully in a human language
2. Knowledge representation to store what it knows or hears
3. Automated reasoning to answer questions and to draw new conclusions
4. Machine learning to adapt to new circumstances and to detect and extrapolate patterns
-Other researchers proposed a total Turing test, which requires interaction with objects and people in the real world; to pass the total Turing test, a robot will also need:
5. Computer vision and speech recognition to perceive the world
6. Robotics to manipulate objects and to move about
Importance in AI:
1. The Turing test provides a concrete and objective way to evaluate the intelligence of a machine, by testing its ability to communicate with humans in natural language, rather than relying on subjective or abstract criteria
2. The Turing test has driven research into the nature of intelligence and the ways in which machines might be able to exhibit intelligent behavior that is comparable to humans.
3. The Turing test has led to the development of conversational agents and chatbots that are able to interact with humans in a way that is increasingly indistinguishable from human conversation.
4. The Turing test has also inspired research into the ethical implications of creating intelligent machines, and the potential risks and benefits of creating machines that can think and communicate like humans.
Ultimately, the Turing test provides a benchmark for evaluating the progress of AI research, by providing a concrete and practical goal that researchers can work towards.
Limitations:
1. The Turing test does not necessarily measure intelligence in a way that is consistent with human intelligence, as it relies heavily on the ability to mimic human behavior and language, rather than exhibiting truly intelligent behavior.
2. The Turing test assumes that intelligence is a black and white, pass or fail phenomenon, but in reality, intelligence is a complex and multifaceted concept that cannot be reduced to a simple yes or no answer.
3. The Turing test is focused on communication and language, and may not be able to assess other important aspects of intelligence, such as creativity, problem-solving, or emotional intelligence.
4. The Turing test is limited by the fact that it relies on human judges to evaluate the intelligence of machines, and different judges may have different criteria or biases that can affect the results of the test.
While the test can help to evaluate the behavior of machines in a human-like context, it may not be able to fully capture the potential consequences of creating machines that are capable of exhibiting truly intelligent behavior.
Value-alignment Problem:
-The value-alignment problem refers to the challenge of ensuring that a machine’s goals and values align with those of human society
-As we move into the real world, it becomes more and more difficult to specify the objective completely and correctly
-For example, in designing a self-driving car, one might think that the objective is to reach the destination safely, but driving along any road incurs a risk of injury due to other errant drivers, equipment failure, and so on; thus, a strict goal of safety would require staying in the garage, so there is a trade-off between making progress towards the destination and incurring a risk of injury
-If we are developing an AI system in the lab or in a simulator—as has been the case for most of the field’s history—there is an easy fix for an incorrectly specified objective: reset the system, fix the objective, and try again; as the field progresses towards increasingly capable intelligent systems that are deployed in the real world, a system deployed with an incorrect objective will have negative consequences
History:
-The inception of artificial intelligence (1943–1956): -Warren McCulloch and Walter Pitts proposed a computational model of the brain in 1943 -Alan Turing proposed the Turing Test as a way to determine whether a machine could think like a human -The Dartmouth Conference marked the beginning of AI as a field of study
-Early enthusiasm, great expectations (1952–1969): -John McCarthy coined the term “artificial intelligence” in 1955 -The development of search algorithms, game-playing programs like the first chess program developed by Claude Shannon, and the first AI programs that could prove mathematical theorems, such as the Logic Theorist developed by Allen Newell and Herbert Simon
-A dose of reality (1966–1973): -AI researchers began to realize that many of the early promises of AI were not going to be fulfilled in the near future -The development of the Shakey robot by SRI International, which was capable of planning actions and carrying out tasks in the physical world but was limited in its abilities
-Expert systems (1969–1986): -The rise of expert systems, such as MYCIN and DENDRAL, which were used in medical diagnosis and chemical analysis, respectively -The emergence of the first AI startups, including Intellicorp and Symbolics
-The return of neural networks (1986–present): -Neural networks were rediscovered as a way to develop machine learning algorithms, leading to the development of the backpropagation algorithm -Development of reinforcement learning techniques for training agents to make decisions in complex environments
-Probabilistic reasoning and machine learning (1987–present): -The rise of probabilistic reasoning and graphical models, such as Bayesian networks and Markov random fields -The development of machine learning techniques, such as support vector machines, decision trees, and random forests
-Big data (2001–present): -The explosion of digital data from sources such as social media, sensors, and scientific instruments -The development of tools and techniques for storing, processing, and analyzing large datasets, such as Hadoop and Spark
-Deep learning (2011–present): -The breakthroughs in deep learning research, including the development of convolutional neural networks and recurrent neural networks -The use of deep learning in a wide range of applications, such as image and speech recognition, and natural language processing
Applications of AI:
-Expert systems: -Expert systems provide expert-level advice in specialized domains by applying inference rules to a knowledge base. Examples include MYCIN, used in medical diagnosis, and DENDRAL, used in chemical analysis
-Natural Language Processing (NLP): -NLP involves teaching machines to understand human language. It has been used in applications such as speech recognition, machine translation, and sentiment analysis. Examples include IBM’s Watson, which was used to compete on Jeopardy! in 2011, and Google Translate, which uses machine learning to translate text between languages.
-Robotics: -Robotics involves teaching machines to interact with the physical world. Applications include manufacturing, healthcare, and space exploration. Examples include the da Vinci Surgical System, which is used to perform minimally invasive surgery, and NASA’s Mars rovers, which are used to explore the surface of Mars.
-Gaming: -AI has been used to create intelligent agents that can play games such as chess, poker, and video games. Examples include IBM’s Deep Blue, which defeated world chess champion Garry Kasparov in 1997, and OpenAI’s Dota 2 bot, which defeated professional players in 2017
-Recommender systems: -Recommender systems use AI to provide personalized recommendations to users. Applications include e-commerce, social media, and online advertising. Examples include Netflix’s recommendation engine, which uses machine learning to suggest movies and TV shows to users, and Amazon’s product recommendation system, which suggests products based on users’ browsing and purchase history.
-Computer vision: -Computer vision involves teaching machines to understand visual information. Applications include image and video analysis, facial recognition, and self-driving cars. Examples include DeepMind’s AlphaGo, which defeated a human world champion in the game of Go in 2016, and Tesla’s Autopilot, which uses computer vision to assist in driving cars.
-Finance: -AI is used in finance for applications such as stock market prediction, fraud detection, and algorithmic trading
Agent:
-An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
-An agent’s choice of action at any given instant can depend on its built-in knowledge and on the entire percept sequence observed to date, but not on anything it hasn’t perceived
-A rational agent is one that does the right thing; when an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives, causing the environment to go through a sequence of states; if the sequence of states is desirable, then the agent has performed well
-For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
-The performance measure is, initially at least, in the mind of the designer of the machine, or in the minds of the users the machine is designed for; some agents have an explicit representation of the performance measure, while in other designs the performance measure is entirely implicit - the agent may do the right thing, but it does not know why
-As a general rule, it is better to design performance measures according to what one actually wants to be achieved in the environment, rather than according to how one thinks the agent should behave
-Rationality maximizes expected performance, while omniscience maximizes actual performance
-In designing an agent, the first step must always be to specify the PEAS (Performance measure, Environment, Actuators, Sensors) as fully as possible
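The percept/actuator loop above can be sketched in Python. The two-square vacuum world used here is in the spirit of the standard textbook illustration, but the exact percepts, actions, and scoring are assumptions:

```python
# Minimal agent/environment loop: the agent perceives through "sensors"
# (here, a (location, dirty) percept), keeps the full percept sequence,
# and acts through "actuators" (Suck, Left, Right). The performance
# measure (squares cleaned) is an invented example.

def vacuum_agent(percept_sequence):
    """Chooses an action; may depend on the entire percept sequence so far."""
    location, dirty = percept_sequence[-1]   # latest percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(steps=4):
    world = {"A": True, "B": True}           # True means the square is dirty
    location = "A"
    percepts, score = [], 0
    for _ in range(steps):
        percepts.append((location, world[location]))   # sense
        action = vacuum_agent(percepts)                # decide
        if action == "Suck":                           # act
            world[location] = False
            score += 1                                 # performance measure
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return score, world

score, world = run()
```

Four steps suffice to clean both squares: Suck at A, move Right, Suck at B, move Left.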
Types of agents: (each type has a corresponding figure)
-Table-driven agents: -These agents are based on a pre-defined table of conditions and actions, and they simply match the current situation to the appropriate action in the table. These agents do not have the ability to learn or reason beyond their pre-defined rules. -An example of a table-driven agent is a thermostat that turns on or off based on the temperature readings.
function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action
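A direct Python transcription of the pseudocode above; the percept vocabulary and table entries are assumptions purely for illustration:

```python
# Table-driven agent: the table maps ENTIRE percept sequences (as tuples)
# to actions, so it must be specified for every possible history.

table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
    ("clean", "clean"): "move",
}

percepts = []   # persistent percept sequence, initially empty

def table_driven_agent(percept):
    percepts.append(percept)            # append percept to the sequence
    return table[tuple(percepts)]       # LOOKUP(percepts, table)
```

Note how the table is indexed by the whole history, not just the latest percept; this is why table-driven agents are impractical in general, since the table grows exponentially with the length of the percept sequence.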
-Simple reflex agents: -These agents operate based on a set of if-then rules, mapping states to actions. However, they do not have the ability to consider the future consequences of their actions. -An example of a simple reflex agent is a traffic light that changes colors based on the presence of vehicles.
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules

  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- rule.ACTION
  return action
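A Python sketch of the pseudocode above, using the traffic-light example; the specific condition-action rules are invented for illustration:

```python
# Simple reflex agent: the action depends ONLY on the current percept,
# interpreted as a state and matched against condition-action rules.

rules = [
    (lambda s: s["vehicle_waiting"] and s["light"] == "red", "switch_to_green"),
    (lambda s: not s["vehicle_waiting"] and s["light"] == "green", "switch_to_red"),
]

def interpret_input(percept):
    return percept            # here the percept already describes the state

def simple_reflex_agent(percept):
    state = interpret_input(percept)       # INTERPRET-INPUT
    for condition, action in rules:        # RULE-MATCH
        if condition(state):
            return action
    return "do_nothing"                    # no rule matched
```

Because the agent keeps no history, it cannot consider the future consequences of switching the light; it just reacts.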
-Model-based reflex agents: -These agents maintain an internal model of the world and use it to make decisions based on the current state and future consequences. -An example of a model-based reflex agent is a chess-playing computer program that uses a model of the board and possible moves to choose its next move.
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent’s current conception of the world state
              transition_model, a description of how the next state depends on the current state and action
              sensor_model, a description of how the current world state is reflected in the agent’s percepts
              rules, a set of condition-action rules
              action, the most recent action, initially none

  state <- UPDATE-STATE(state, action, percept, transition_model, sensor_model)
  rule <- RULE-MATCH(state, rules)
  action <- rule.ACTION
  return action
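A Python sketch of the pseudocode above. The wall-following robot scenario and its models are assumptions chosen to keep the example tiny:

```python
# Model-based reflex agent: internal state is updated from the most
# recent action and the new percept, so the agent tracks things it
# cannot directly perceive (here, how far it has moved).

state = {"position": 0}
last_action = None

def update_state(state, action, percept):
    # transition model: "what my actions do"
    if action == "forward":
        state["position"] += 1
    # sensor model: fold the new percept into the state
    state["blocked"] = percept["blocked"]
    return state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)    # UPDATE-STATE
    action = "turn" if state["blocked"] else "forward"   # condition-action rules
    last_action = action                                 # remember most recent action
    return action
```

Unlike a simple reflex agent, the position estimate survives between calls: after two "forward" actions the agent believes it has moved two squares, even though no percept says so directly.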
(Here, can’t the transition model be described as “what my actions do” and “how the world evolves”?)
-Goal-based (model-based) agents: -These agents have a set of goals they are trying to achieve, and they make decisions based on those goals. They can reason about future actions and the consequences of those actions. -An example of a goal-based agent is a personal assistant that schedules your meetings based on your preferences and availability.
-Utility-based (model-based) agents: -These agents are similar to goal-based agents, but they also consider the relative value or utility of achieving different goals. They can make trade-offs between competing goals based on their importance. -An example of a utility-based agent is a self-driving car that chooses between different routes based on factors such as travel time, safety, and fuel efficiency.
(The last two are similar; the difference is that model-based agents act on condition-action rules, goal-based agents try to move closer to a goal, and utility-based agents try to optimize utility.)
(Now that the model is built, who specifies the rules for these agents?)
Learning Agents: [FIG]
-Alan Turing proposed building learning machines and then teaching them, which is the preferred method for creating state-of-the-art systems
-Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent [FIG]
-The learning element is responsible for making improvements; it uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future
-The design of the learning element depends very much on the design of the performance element, which is responsible for selecting external actions; when trying to design an agent that learns a certain capability, the first question is not “How am I going to get it to learn this?” but “What kind of performance element will my agent use to do this once it has learned how?”
-The critic is necessary because the percepts themselves provide no indication of the agent’s success; for example, a chess program could receive a percept indicating that it has checkmated its opponent, but it needs a performance standard to know that this is a good thing; the percept itself does not say so
-The problem generator is responsible for suggesting actions that will lead to new and informative experiences; if the performance element had its way, it would keep doing the actions that are best given what it knows, but if the agent is willing to explore a little and do some perhaps suboptimal actions in the short run, it might discover much better actions for the long run
-The learning element can make changes to any of the components; observation of pairs of successive states of the environment can allow the agent to learn “what my actions do” and “how the world evolves” in response to its actions
-For example: (if the taxi driver presses the brake this hard, this much deceleration results; the problem generator keeps saying to explore further; also, in another example, one has to understand that receiving no tips counts as a negative reward)
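The four components above can be sketched as a tiny learning agent. The braking scenario, the bandit-style setup, and all reward numbers are assumptions made purely to show how the pieces interact:

```python
# Learning-agent sketch: performance element picks actions, the critic
# scores outcomes against a performance standard, the learning element
# updates the action-value estimates, and the problem generator
# occasionally suggests exploratory (possibly suboptimal) actions.

import random

values = {"brake_soft": 0.0, "brake_hard": 0.0}   # learned estimates
counts = {"brake_soft": 0, "brake_hard": 0}

def performance_element():
    return max(values, key=values.get)             # best action so far

def problem_generator(explore=0.3):
    if random.random() < explore:                  # suggest an experiment
        return random.choice(list(values))
    return None                                    # defer to performance element

def learning_element(action, reward):
    counts[action] += 1
    # incremental average of the critic's feedback for this action
    values[action] += (reward - values[action]) / counts[action]

def step(critic):
    action = problem_generator() or performance_element()
    learning_element(action, critic(action))
    return action

# Hypothetical critic (performance standard): hard braking is
# uncomfortable for passengers, so it earns a negative reward.
critic = lambda a: 1.0 if a == "brake_soft" else -1.0
random.seed(0)
for _ in range(50):
    step(critic)
```

Without the problem generator the agent would never try the second action at all; with it, the agent samples both, and the learning element's estimates settle on soft braking as the better policy.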