
Sub-Optimal Stationary Policies for a Class of Stochastic Optimization Problems Arising in Robotic Surveillance Applications

Author Affiliations
M. Park, S. Darbha

Texas A&M University, College Station, TX

K. Krishnamoorthy

Infoscitex Corporation, Dayton, OH

P. P. Khargonekar

University of Florida, Gainesville, FL

M. Pachter

Air Force Institute of Technology, Wright-Patterson AFB, OH

P. Chandler

Air Force Research Laboratory, Wright-Patterson AFB, OH

Paper No. DSCC2012-MOVIC2012-8610, pp. 263-272; 10 pages
doi:10.1115/DSCC2012-MOVIC2012-8610
From:
  • ASME 2012 5th Annual Dynamic Systems and Control Conference joint with the JSME 2012 11th Motion and Vibration Conference
  • Volume 3: Renewable Energy Systems; Robotics; Robust Control; Single Track Vehicle Dynamics and Control; Stochastic Models, Control and Algorithms in Robotics; Structure Dynamics and Smart Structures; Surgical Robotics; Tire and Suspension Systems Modeling; Vehicle Dynamics and Control; Vibration and Energy; Vibration Control
  • Fort Lauderdale, Florida, USA, October 17–19, 2012
  • Conference Sponsors: Dynamic Systems and Control Division
  • ISBN: 978-0-7918-4531-8
  • Copyright © 2012 by ASME

Abstract

This paper deals with the development of sub-optimal decision making algorithms for a collection of robots that aid a remotely located human operator in classifying incursions across a perimeter in a surveillance application. The operator must classify each incursion as either a nuisance or a threat. Whenever there is an incursion into the perimeter, Unattended Ground Sensors (UGS) raise an alert, and the robots service the alerts by visiting the alert location, collecting evidence in the form of video and other imagery, and transmitting it to the operator. A robot faces two competing needs: it should spend more time at an alert location to aid the operator in accurate classification, and it should service the alerts as quickly as possible so that the evidence collected remains relevant. A natural problem is to determine the optimal amount of time a robot must spend servicing an alert. In this paper, we discretize the problem spatially and temporally and recast the optimization problem as follows: is it better, in terms of maximizing the expected discounted payoff, for a robot to spend the next time interval at the alert location? The payoff associated with a state is an increasing function of the time spent by a robot servicing an alert and a decreasing function of the number of unserviced alerts. This problem can easily be cast as a Markov Decision Process (MDP). However, the number of states runs into the billions even for a modest-size problem. We therefore consider Approximate Dynamic Programming via linear programming, as this approach provides upper and lower bounds on the optimal expected discounted payoff and enables the construction of a sub-optimal policy. The bounds may then be used to estimate the quality of the sub-optimal policy employed. We also provide a computationally tractable way of computing the lower bound using linear programming. Finally, numerical results supporting our method are provided.
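
For readers who want a concrete picture of the approach, the following is a minimal sketch of the approximate-linear-programming idea the abstract refers to: the value function of a toy, randomly generated MDP is restricted to the span of a small basis, and the resulting linear program yields a pointwise upper bound on the optimal expected discounted payoff together with a greedy sub-optimal stationary policy. The MDP, basis functions, and state-relevance weights below are illustrative assumptions, not the paper's surveillance model.

```python
# Sketch: approximate dynamic programming via linear programming (ALP)
# on a toy MDP. All problem data here is illustrative, not the paper's model.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.9

# Toy MDP: r[s, a] is the one-step payoff, P[a][s, :] a transition distribution.
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

# Basis matrix Phi: the value function is approximated as v = Phi @ w.
# Assumed basis: a constant feature plus a coarse "state index" feature.
Phi = np.column_stack([np.ones(n_states), np.arange(n_states) / n_states])

c = np.full(n_states, 1.0 / n_states)  # state-relevance weights

# ALP: minimize c' Phi w  subject to  (Phi w)(s) >= r(s,a) + gamma * P[a][s] @ Phi w
# for every (s, a). Rearranged into linprog's A_ub @ x <= b_ub form:
# (gamma * P[a] - I) @ Phi @ w <= -r[:, a].
A_ub = np.vstack([(gamma * P[a] - np.eye(n_states)) @ Phi
                  for a in range(n_actions)])
b_ub = np.concatenate([-r[:, a] for a in range(n_actions)])

res = linprog(c @ Phi, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Phi.shape[1])
w = res.x
v_hat = Phi @ w  # feasibility implies v_hat >= V* pointwise: an upper bound

# Greedy one-step-lookahead policy with respect to v_hat: a sub-optimal
# stationary policy of the kind the bounds can certify.
q = np.stack([r[:, a] + gamma * P[a] @ v_hat for a in range(n_actions)], axis=1)
policy = q.argmax(axis=1)
print("upper bound on optimal values:", np.round(v_hat, 3))
print("greedy stationary policy:", policy)
```

The point of the restricted basis is that the number of LP variables is the number of features rather than the number of states, which is what makes the approach viable when the exact MDP has billions of states.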

