Planning in Markov Stochastic Task Domains
Yong Lin, Fillia Makedon
Pages - 54 - 64     |    Revised - 30-08-2010     |    Published - 30-10-2010
Published in International Journal of Artificial Intelligence and Expert Systems (IJAE)
Volume - 1   Issue - 3    |    Publication Date - October 2010
KEYWORDS
Markov decision processes, POMDP, task planning, uncertainty, decision-making
ABSTRACT
In decision-theoretic planning, a challenge for Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) is that many problem domains contain large state spaces and complex tasks, which results in poor solution performance. We develop a task analysis and modeling (TAM) approach, in which the (PO)MDP model is separated into a task view and an action view. In the task view, TAM models the problem domain using a task equivalence model with task-dependent abstract states and observations. We provide a learning algorithm to obtain the parameter values of task equivalence models. We present three typical examples to explain the TAM approach. Experimental results indicate that our approach can greatly improve the computational capacity of task planning in Markov stochastic domains.
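The TAM learning algorithm itself is not specified on this page, so the following is only a minimal, generic sketch of the POMDP machinery the abstract builds on: belief tracking over a small, hypothetical task-level state abstraction. All state, action, and observation names and the probability values are illustrative placeholders, not the authors' task equivalence model.

```python
import numpy as np

# Generic POMDP belief update over a hypothetical "task-level" abstraction
# (3 abstract task states, 2 actions, 2 observations). Everything below is
# an illustrative placeholder, NOT the authors' TAM model or algorithm.

states = ["task-pending", "task-in-progress", "task-done"]   # abstract task states
actions = ["work", "check"]                                   # task-level actions
observations = ["no-progress", "progress"]                    # task-level observations

# T[a, s, s'] : transition probabilities; each T[a, s, :] sums to 1
T = np.array([
    [[0.2, 0.8, 0.0],    # "work" from task-pending
     [0.0, 0.5, 0.5],    # "work" from task-in-progress
     [0.0, 0.0, 1.0]],   # "work" from task-done
    [[1.0, 0.0, 0.0],    # "check" leaves the task state unchanged
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
])

# Z[a, s', o] : observation probabilities; each Z[a, s', :] sums to 1
Z = np.array([
    [[0.9, 0.1],         # after "work", landing in task-pending
     [0.4, 0.6],
     [0.1, 0.9]],
    [[0.8, 0.2],         # after "check"
     [0.3, 0.7],
     [0.05, 0.95]],
])

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to Z[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = T[a].T @ b               # predict step: sum_s T[a, s, s'] * b(s)
    unnormalized = Z[a, :, o] * predicted
    return unnormalized / unnormalized.sum()

# Example: start certain the task is pending, take "work", observe "progress"
b0 = np.array([1.0, 0.0, 0.0])
b1 = belief_update(b0, actions.index("work"), observations.index("progress"))
print(dict(zip(states, np.round(b1, 3))))
```

A task-level abstraction like this keeps the belief vector small; the paper's contribution, per the abstract, is how to build and learn such task equivalence models rather than the belief update itself.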
ABSTRACTING & INDEXING
1 Google Scholar 
2 Academic Index 
3 CiteSeerX 
4 refSeek 
5 Scribd 
6 SlideShare 
7 PDFCAST 
8 PdfSR 
REFERENCES
Boutilier, C., Dearden, R., & Goldszmidt, M. (1995). Exploiting structure in policy construction. In Proceedings of IJCAI, pp. 1104-1113.
Chang, A., & Amir, E. (2006). Goal achievement in partially known, partially observable domains. In Proceedings of ICAPS, pp. 203-211. AAAI.
Dean, T., & Lin, S.-H. (1995). Decomposition techniques for planning in stochastic domains. In Proceedings of IJCAI, pp. 1121-1127. Morgan Kaufmann.
Deák, F., Kovács, A., Váncza, J., & Dobrowiecki, T. P. (2001). Hierarchical knowledge-based process planning in manufacturing. In Proceedings of the IFIP 11 International PROLAMAT Conference on Digital Enterprise, pp. 428-439.
Dietterich, T. G. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, 227-303.
Givan, R., Dean, T., & Grieg, M. (2003). Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 147 (1-2), 163-223.
Hansen, E. A., & Zhou, R. (2003). Synthesis of hierarchical finite-state controllers for POMDPs. In Proceedings of ICAPS, pp. 113-122. AAAI.
Hsiao, K., Kaelbling, L. P., & Lozano-Pérez, T. (2007). Grasping POMDPs. In Proceedings of ICRA, pp. 4685-4692.
Kurniawati, H., Hsu, D., & Lee, W. S. (2008). SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proceedings of Robotics: Science and Systems.
Lekavý, M., & Návrat, P. (2007). Expressivity of STRIPS-like and HTN-like planning. In Agent and Multi-Agent Systems: Technologies and Applications, First KES International Symposium, Vol. 4496, pp. 121-130. Springer.
Littman, M. L., Cassandra, A. R., & Kaelbling, L. P. (1995). Learning policies for partially observable environments: scaling up. In Proceedings of ICML, pp. 362-370.
Pineau, J., Roy, N., & Thrun, S. (2001). A hierarchical approach to POMDP planning and execution. In Workshop on Hierarchy and Memory in Reinforcement Learning (ICML).
Potts, D., & Hengst, B. (2004). Discovering multiple levels of a task hierarchy concurrently. Robotics and Autonomous Systems, 49 (1-2), 43-55.
Poupart, P., & Boutilier, C. (2002). Value-directed compression of POMDPs. In Proceedings of NIPS, pp. 1547-1554.
Singh, S. P., & Cohn, D. (1997). How to dynamically merge Markov decision processes. In Proceedings of NIPS.
Smith, T., & Simmons, R. G. (2004). Heuristic search value iteration for POMDPs. In Proceedings of UAI.
Smith, T., & Simmons, R. G. (2005). Point-based POMDP algorithms: Improved analysis and implementation. In Proceedings of UAI, pp. 542-547.
MANUSCRIPT AUTHORS
Mr. Yong Lin
University of Texas at Arlington - United States of America
bracelyn@gmail.com
Professor Fillia Makedon
United States of America