UBC Department of Computer Science Undergraduate Events (more details @ https://my.cs.ubc.ca/students/development/events)
Simba Technologies Tech Talk / Info Session: Mon., Sept 21, 6-7 pm, DMP 310
Facebook Crush Your Code Workshop: Tues., Sept 22, 6-7 pm, DMP 310
Co-op Drop-in FAQ Session: Thurs., Sept 24, 12:30-1:30 pm, Reboot Cafe
Resume Editing Drop-in Sessions: Mon., Sept 28, 10 am-2 pm (sign up at 9 am), ICCS 253
EA Info Session: Mon., Sept 28, 6-8 pm, DMP 310
UBC Careers Day & Professional School Fair: Wed., Sept 30 & Thurs., Oct 1, 10 am-3 pm, AMS Nest

Intelligent Systems (AI-2), Computer Science, CPSC 422, Lecture 6, Sep 21, 2015
Slide credit (POMDP): C. Conati and P. Viswanathan

Lecture Overview
Partially Observable Markov Decision Processes:
Summary
Belief State
Belief State Update
Policies and Optimal Policy

Markov Models
Markov Chains
Hidden Markov Models
Markov Decision Processes (MDPs)
Partially Observable Markov Decision Processes (POMDPs)

Belief State and its Update
b'(s') = α P(e | s') Σ_s P(s' | s, a) b(s), i.e., b' = Forward(b, a, e)
To summarize: when the agent performs action a in belief state b and then receives observation e, filtering gives a unique new probability distribution over states: a deterministic transition from one belief state to another.
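To make the update concrete, here is a minimal Python sketch of this filtering step (the Forward operation). The array layout, names (P_trans, P_obs), and the tiny two-state example are illustrative assumptions, not part of the slides.

```python
import numpy as np

def belief_update(b, a, e, P_trans, P_obs):
    """One POMDP belief update: b'(s') = alpha * P(e|s') * sum_s P(s'|s,a) b(s).

    b       : current belief, shape (S,)
    a, e    : action index, observation index
    P_trans : transition model, P_trans[a, s, s'] = P(s'|s,a), shape (A, S, S)
    P_obs   : observation model, P_obs[s', e] = P(e|s'), shape (S, E)
    """
    predicted = b @ P_trans[a]            # predict: sum_s P(s'|s,a) b(s)
    unnormalized = P_obs[:, e] * predicted  # correct: weight by P(e|s')
    return unnormalized / unnormalized.sum()  # normalize (the alpha factor)

# Tiny illustrative example (hypothetical 2-state, 1-action, 2-observation POMDP)
P_trans = np.array([[[0.9, 0.1],
                     [0.2, 0.8]]])        # P_trans[0, s, s']
P_obs = np.array([[0.8, 0.2],
                  [0.3, 0.7]])            # P_obs[s', e]
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, e=1, P_trans=P_trans, P_obs=P_obs))
```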

Optimal Policies in POMDPs
Theorem (Astrom, 1965): The optimal policy in a POMDP is a function π*(b), where b is the belief state (probability distribution over states).
That is, π*(b) is a function from belief states (probability distributions) to actions. It does not depend on the actual state the agent is in. Good, because the agent does not know that; all it knows are its beliefs!

Decision Cycle for a POMDP agent
Given current belief state b, execute a = π*(b)
Receive observation e
Compute b'(s') = α P(e | s') Σ_s P(s' | s, a) b(s)
Repeat
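A rough sketch of this decision cycle in Python, assuming a belief_update function like the one above, a policy pi_star mapping beliefs to actions, and an environment object env whose step() executes an action and returns an observation; all of these names are illustrative assumptions.

```python
def pomdp_agent_loop(b, pi_star, env, belief_update, n_steps=100):
    """Decision cycle for a POMDP agent (sketch).

    b            : initial belief (probability distribution over states)
    pi_star      : policy, a function from beliefs to actions
    env          : environment; env.step(a) executes a and returns an observation e
    belief_update: filtering step, b' = Forward(b, a, e)
    """
    for _ in range(n_steps):
        a = pi_star(b)               # given current belief state b, execute a = pi*(b)
        e = env.step(a)              # receive observation e
        b = belief_update(b, a, e)   # compute the new belief, then repeat
    return b
```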

How to Find an Optimal Policy?
Turn a POMDP into a corresponding MDP and then solve that MDP
Generalize VI to work on POMDPs
Develop approximate methods: Point-Based VI, Look Ahead

Finding the Optimal Policy: State of the Art
Turn a POMDP into a corresponding MDP and then apply VI: only small models

Generalize VI to work on POMDPs: 10 states in 1998, 200,000 states in 2008-09
Develop approximate methods (Point-Based VI and Look Ahead): even 50,000,000 states
http://www.cs.uwaterloo.ca/~ppoupart/software.html

Dynamic Decision Networks (DDN)
A comprehensive approach to agent design in partially observable, stochastic environments.
Basic elements of the approach:
Transition and observation models are represented via a Dynamic Bayesian Network (DBN).
The network is extended with decision and utility nodes, as done in decision networks.
[DDN figure: action nodes At-2 ... At+2 interleaved with reward nodes Rt-1, Rt and evidence nodes Et-1, Et]

Dynamic Decision Networks (DDN)
A filtering algorithm is used to incorporate each new percept and action to update the belief state over Xt.

Decisions are made by projecting forward possible action sequences and choosing the best one: look-ahead search.

Dynamic Decision Networks (DDN)
[Figure: the network unrolled from At-2 to At+2; the past portion is handled by filtering, the future portion by projection (3-step look-ahead here). Nodes in yellow are known (evidence collected, decisions made, local rewards).]
The agent needs to make a decision at time t (node At).
The network is unrolled into the future for 3 steps.
Node U represents the utility (or expected optimal reward V*) in state Xt+3,

i.e., the reward in that state plus all subsequent rewards. It is available only in approximate form (from another approximation method).

Look Ahead Search for Optimal Policy
General idea: expand the decision process for n steps into the future, that is:
Try all actions at every decision point
Assume receiving all possible observations at observation points
Result: a tree of depth 2n+1 where every branch represents one of the possible sequences of n actions and n observations available to the agent, and the corresponding belief states.
The leaf at the end of each branch corresponds to the belief state reachable via that sequence of actions and observations (use filtering to compute it).
Back up the utility values of the leaf nodes along their corresponding branches, combining them with the rewards along that

path.
Pick the branch with the highest expected value.

Look Ahead Search for Optimal Policy
[Look-ahead tree figure: the decision node At in belief state P(Xt | E1:t, A1:t-1) branches on actions a1t ... akt; each action leads to a chance node for observation Et+1 with outcomes e1t+1, e2t+1, ..., and these chance nodes describe the probability of each observation. The tree continues with decision nodes At+1 in P(Xt+1 | E1:t+1, A1:t) and At+2 in P(Xt+2 | E1:t+2, A1:t+1), interleaved with observation nodes Et+2 and Et+3, down to leaf belief states P(Xt+3 | E1:t+3, A1:t+2) with utilities U(Xt+3).]
Belief states are computed via any filtering algorithm, given the sequence of actions and observations up to that point.
To back up the utilities: take the average (expectation) at chance nodes and the max at decision nodes.

Best action at time t?
A. a1
B. a2
C. indifferent
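A compact recursive sketch of the look-ahead search described above (expectimax over belief states): max over actions at decision nodes, expectation over observations at chance nodes, and an approximate utility at the leaves. The model layout (P_trans[a, s, s'], P_obs[s', e], R[s]) and the names U_hat and gamma are assumptions for illustration, matching the earlier filtering sketch.

```python
import numpy as np

def forward(b, a, e, P_trans, P_obs):
    """Filtering step: returns (b' = Forward(b, a, e), P(e | b, a))."""
    predicted = b @ P_trans[a]              # sum_s P(s'|s,a) b(s)
    p_e = predicted @ P_obs[:, e]           # P(e | b, a)
    b_next = (P_obs[:, e] * predicted) / p_e if p_e > 0 else predicted
    return b_next, p_e

def q_value(b, a, depth, P_trans, P_obs, R, U_hat, gamma):
    """Value of doing a in belief b: chance node averaging over observations."""
    value = float(b @ R)                    # immediate expected reward rho(b)
    for e in range(P_obs.shape[1]):
        b_next, p_e = forward(b, a, e, P_trans, P_obs)
        if p_e > 0:
            value += gamma * p_e * lookahead_value(
                b_next, depth - 1, P_trans, P_obs, R, U_hat, gamma)
    return value

def lookahead_value(b, depth, P_trans, P_obs, R, U_hat, gamma=0.95):
    """Decision node: max over actions; at the leaves use the approximate utility U_hat."""
    if depth == 0:
        return U_hat(b)
    return max(q_value(b, a, depth, P_trans, P_obs, R, U_hat, gamma)
               for a in range(P_trans.shape[0]))

def best_action(b, depth, P_trans, P_obs, R, U_hat, gamma=0.95):
    """Pick the branch (first action) with the highest expected value."""
    return max(range(P_trans.shape[0]),
               key=lambda a: q_value(b, a, depth, P_trans, P_obs, R, U_hat, gamma))
```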

Look Ahead Search for Optimal Policy
What is the time complexity of exhaustive search at depth d, with |A| available actions and |E| possible observations?
A. O(d * |A| * |E|)
B. O(|A|^d * |E|^d)
C. O(|A|^d * |E|)
Would look-ahead work better when the discount factor is:
A. Close to 1
B. Not too close to 1

Finding the Optimal Policy: State of the Art
Turn a POMDP into a corresponding MDP and then apply VI: only small models
Generalize VI to work on POMDPs: 10 states in 1998, 200,000 states in 2008-09
Develop approximate methods (Point-Based VI and Look Ahead): even 50,000,000 states
http://www.cs.uwaterloo.ca/~ppoupart/software.html

Some Applications of POMDPs
S. Young, M. Gasic, B. Thomson, and J. Williams. POMDP-based Statistical Spoken Dialogue Systems: a Review. Proc. IEEE, 2013.
J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393-422, 2007.
S. Thrun, et al. Probabilistic algorithms and the interactive museum tour-guide robot Minerva. International Journal of Robotics Research, 19(11):972-999, 2000.
A. N. Rafferty, E. Brunskill, T. L. Griffiths, and P. Shafto. Faster teaching by POMDP planning. In Proc. of AI in Education, pages 280-287, 2011.
P. Dai, Mausam, and D. S. Weld. Artificial intelligence for artificial artificial intelligence. In Proc. of the 25th AAAI Conference on AI, 2011. [intelligent control of workflows]

Another famous Application
Learning and Using POMDP models of Patient-Caregiver Interactions During Activities of Daily Living
Goal: Help older adults living

with cognitive disabilities (such as Alzheimer's) when they forget the proper sequence of tasks that need to be completed, or lose track of the steps that they have already completed. Source: Jesse Hoey, UofT 2007.

R&R Systems BIG PICTURE
[Course-map figure, by Problem (Static / Sequential), Environment (Deterministic / Stochastic), Representation, and Reasoning Technique. Static: Constraint Satisfaction (Vars + Constraints) via Search, Arc Consistency, SLS; Query via Logics (Search), Belief Nets (Variable Elimination, Approx. Inference), Markov Chains and HMMs (Temporal Inference). Sequential: Planning via STRIPS (Search), Decision Nets (Variable Elimination), Markov Decision Processes (Value Iteration), POMDPs (Approx. Inference).]

422 Big Picture
[Course-map figure, Representation and Reasoning Technique. Deterministic: Logics, First Order Logics, Ontologies, temporal representations; Full Resolution, SAT. Stochastic: Belief Nets (Approx.: Gibbs), Markov Chains and HMMs (Forward, Viterbi; Approx.: Particle Filtering), Undirected Graphical Models (Conditional Random Fields), Markov Decision Processes and Partially Observable MDPs (Value Iteration, Approx. Inference), Reinforcement Learning. Hybrid (Det + Sto): Prob CFG, Prob Relational Models, Markov Logics. Query, Planning, Applications of AI.]

Learning Goals for today's class
You can:
Define a policy for a POMDP
Describe the space of possible methods for computing the optimal policy for a given POMDP

Define and trace Look Ahead Search for finding an (approximate) optimal policy
Compute the complexity of Look Ahead Search

TODO for next Wed
Read textbook 11.3 (Reinforcement Learning): 11.3.1 Evolutionary Algorithms, 11.3.2 Temporal Differences, 11.3.3 Q-learning
Assignment 1 will be posted on Connect today: VInfo and VControl, MDPs (Value Iteration), POMDPs

In practice, the hardness of POMDPs arises from the complexity of policy spaces and the potentially large number of states. Nevertheless, real-world POMDPs tend to exhibit a significant amount of structure, which can often be exploited to improve the scalability of solution algorithms. Many POMDPs have simple policies of high quality. Hence, it is often possible to quickly find those policies by restricting the search to some class of compactly representable policies. When states correspond to the joint instantiation of some random variables (features), it is often possible to exploit various forms of probabilistic independence (e.g., conditional independence and context-specific independence), decomposability (e.g., additive separability), and sparsity in the POMDP dynamics to mitigate the impact of large state spaces.

Symbolic Perseus
Symbolic Perseus: a point-based value iteration algorithm that uses Algebraic Decision Diagrams (ADDs) as the underlying data structure to tackle large factored POMDPs.
Flat methods: 10 states in 1998, 200,000 states in 2008
Factored methods: 50,000,000 states
http://www.cs.uwaterloo.ca/~ppoupart/software.html

POMDP as MDP
By applying simple rules of probability we can derive a transition model over belief states:
P(b' | a, b) = Σ_e P(b' | e, a, b) Σ_{s'} P(e | s') Σ_s P(s' | s, a) b(s)

where P(b' | e, a, b) = 1 if b' = Forward(e, a, b), and 0 otherwise.
When the agent performs a given action a in belief state b and then receives observation e, filtering gives a unique new probability distribution over states: a deterministic transition from one belief state to the next.
We can also define a reward function for belief states:
ρ(b) = Σ_s b(s) R(s)
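A small sketch of these two definitions, assuming the same array-based model representation as in the earlier filtering example (P_trans[a, s, s'], P_obs[s', e], R[s]); the tolerance used to compare belief vectors is an illustrative choice.

```python
import numpy as np

def forward(b, a, e, P_trans, P_obs):
    """Deterministic belief transition b' = Forward(e, a, b) via filtering."""
    unnormalized = P_obs[:, e] * (b @ P_trans[a])
    return unnormalized / unnormalized.sum()

def belief_transition_prob(b_next, a, b, P_trans, P_obs, tol=1e-9):
    """P(b'|a,b) = sum_e P(b'|e,a,b) * sum_{s'} P(e|s') sum_s P(s'|s,a) b(s)."""
    predicted = b @ P_trans[a]                      # sum_s P(s'|s,a) b(s)
    prob = 0.0
    for e in range(P_obs.shape[1]):
        p_e = predicted @ P_obs[:, e]               # P(e | a, b)
        if p_e > 0 and np.allclose(forward(b, a, e, P_trans, P_obs), b_next, atol=tol):
            prob += p_e                             # P(b'|e,a,b) is 1 only when b' = Forward(e,a,b)
    return prob

def belief_reward(b, R):
    """rho(b) = sum_s b(s) R(s)."""
    return float(b @ R)
```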

Solving POMDP as MDP
So we have defined a POMDP as an MDP over the belief states. Why bother?
Because it can be shown that an optimal policy π*(b) for this MDP is also an optimal policy for the original POMDP; i.e., solving a POMDP in its physical state space is equivalent to solving the corresponding MDP in belief-state space.
Great, we are done!

POMDP as MDP
But how does one find the optimal policy π*(b)?
One way is to restate the POMDP as an MDP in belief-state space.
State space:

the space of probability distributions over the original states.
For our grid world, what is the belief-state space? The initial distribution <1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 0, 0> is a point in this space.
What does the transition model need to specify?

Does not work in practice
Although a transition model can be effectively computed from the POMDP specification, finding (approximate) policies for continuous, multidimensional MDPs is PSPACE-hard. Problems with a few dozen states are often infeasible. Alternative approaches are needed.

How to Find an Optimal Policy?
Turn a POMDP into a corresponding MDP and then solve the MDP (✗)
Generalize VI to work on POMDPs (also ✗)
Develop approximate methods (✓): Point-Based Value Iteration, Look Ahead

Recent Method: Point-based Value Iteration
Find a solution for a subset of all belief states
Not all belief states are necessarily reachable

Generalize the solution to all belief states
Methods include PERSEUS, PBVI, and HSVI, and other similar approaches (FSVI, PEGASUS); a sketch of the point-based backup appears after the recap below.

How to Find an Optimal Policy?
Turn a POMDP into a corresponding MDP and then solve the MDP (✗)
Generalize VI to work on POMDPs (also ✗)
Develop approximate methods (✓): Point-Based VI, Look Ahead
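The point-based idea can be sketched in a few lines. This is not the Symbolic Perseus/ADD machinery mentioned above, just a minimal flat-array point-based backup over a fixed, pre-selected set of belief points B (the belief-set expansion step of PBVI is omitted). The array layout (P_trans[a, s, s'], P_obs[s', e], R[s]), the discount gamma, and the iteration count are assumptions carried over from the earlier sketches.

```python
import numpy as np

def point_based_backup(b, Gamma, P_trans, P_obs, R, gamma=0.95):
    """One point-based Bellman backup at belief point b.

    Gamma   : current value function, a list of alpha vectors (each shape (S,))
    Returns : (new alpha vector for b, greedy action at b)
    """
    A = P_trans.shape[0]
    E = P_obs.shape[1]
    best_alpha, best_value, best_act = None, -np.inf, None
    for a in range(A):
        # g_{a,e}(s) = gamma * sum_{s'} P(e|s') P(s'|s,a) alpha(s'), picking the alpha best for b
        g_a = R.copy()                       # state-based reward R(s), as in the slides
        for e in range(E):
            candidates = [gamma * (P_trans[a] @ (P_obs[:, e] * alpha)) for alpha in Gamma]
            g_a = g_a + max(candidates, key=lambda g: g @ b)
        if g_a @ b > best_value:
            best_alpha, best_value, best_act = g_a, g_a @ b, a
    return best_alpha, best_act

def pbvi(B, P_trans, P_obs, R, gamma=0.95, iterations=30):
    """Point-based value iteration over a fixed set of belief points B (a sketch)."""
    S = P_trans.shape[1]
    Gamma = [np.zeros(S)]                    # start from the zero value function
    for _ in range(iterations):
        Gamma = [point_based_backup(b, Gamma, P_trans, P_obs, R, gamma)[0] for b in B]
    return Gamma
```

The resulting alpha vectors generalize beyond the chosen points: the value of any belief b is approximated by max over alpha in Gamma of (alpha · b), which is how the solution computed at a subset of belief points is extended to all of them.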
