Title: Handbook of Simulation Optimization
ISBN: 9781493913831, 9781493913848
Preface
Arguably, the two most powerful operations research/management science (OR/MS) techniques are simulation and optimization. Simulation in this book will refer to stochastic simulation, whereby there is randomness in the system, also known as Monte Carlo simulation. Optimization dates back many centuries and is generally considered the older of the two siblings. Both approaches were propelled forward by the advent of the digital computer over half a century ago, leading up to the present golden age when both routinely address complex large-scale real-world problems and both are implemented in a large variety of computer software packages. However, combining the two techniques is a more recent development, and software effectively integrating the two is relatively limited; thus, simulation optimization remains an exciting and fertile area of research. The purpose of the handbook is to provide an overview of the state of the art of simulation optimization, comprising a survey of the most well-established approaches and a sampling of recent research advances in theory/methodology.
The single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches to problems of interest. The intended audience includes researchers, practitioners, and graduate students in the business/engineering fields of operations research, management science, operations management, and stochastic control, as well as in economics/finance and computer science.
In addition to sincerely thanking the authors for their contributions, which have resulted in a high-quality volume, I wish to gratefully acknowledge the support and encouragement of Fred Hillier, the Series Editor, whose persistence ensured that I would not give up on this project when other commitments appeared overwhelming. Last but certainly not least, I'd like to thank Marie Chau for her superb editorial assistance.

College Park, MD, USA
Michael C. Fu
February 2014
Contents
1 Overview of the Handbook 1
References 6
2 Discrete Optimization via Simulation 9
2.1 Introduction 9
2.2 Optimality Conditions 13
2.3 Ranking and Selection 16
2.4 Ordinal Optimization 22
2.5 Globally Convergent Random Search Algorithms 25
2.6 Locally Convergent Random Search Algorithms 33
2.7 Algorithm Enhancements 39
2.8 Using Commercial Solvers 39
2.8.1 Preliminary Experiment to Control Sampling Variability 40
2.8.2 Restarting the Optimization 40
2.8.3 Statistical Clean Up After Search 41
References 41
3 Ranking and Selection: Efficient Simulation Budget Allocation 45
3.1 Introduction 45
3.1.1 Intuitive Explanations of Simulation Budget Allocation 46
3.1.2 Overview of Ranking and Selection (R&S) 47
3.1.3 Organization 49
3.2 Problem Formulation and Selection Procedures 49
3.2.1 Problem Formulation of Selecting the Best 49
3.2.2 A Generic Algorithm for Selection Procedures 51
3.2.3 General Concepts for OCBA and EVI 52
3.3 Optimal Computing Budget Allocation (OCBA) 54
3.3.1 Maximization of PCS 54
3.3.2 Asymptotic Allocation Rule 54
3.3.3 Sequential Heuristic Algorithm for Allocation 55
3.3.4 Numerical Results 57
3.3.5 Minimization of EOC 59
3.3.6 Other Variants 60
3.4 Expected Value of Information (EVI) 63
3.4.1 Linear Loss (LL) 64
3.4.2 Small-Sample EVI Allocation Rule (LL_1) 68
3.4.3 Stopping Rules 69
3.4.4 Numerical Results for LL and LL_1 70
3.4.5 Economics of Selection Procedures (ESP) 72
3.4.6 Other Variants of EVI 76
3.5 Conclusion 77
References 78
4 Response Surface Methodology 81
4.1 Introduction 81
4.2 RSM Basics 83
4.3 RSM in Simulation 87
4.4 Adapted Steepest Descent (ASD) 88
4.5 Multiple Responses: Generalized RSM 89
4.6 Testing an Estimated Optimum in GRSM: KKT Conditions 93
4.7 Robust Optimization 98
4.8 Conclusions 102
References 102
5 Stochastic Gradient Estimation 105
5.1 Introduction 105
5.2 Indirect Gradient Estimators 108
5.2.1 Finite Differences 109
5.2.2 Simultaneous Perturbation 110
5.3 Direct Gradient Estimators 111
5.3.1 Derivatives of Random Variables 115
5.3.2 Derivatives of Measures 116
5.3.3 Input Distribution Examples 118
5.3.4 Output Examples 125
5.3.5 Rudimentary Theory 132
5.3.6 Guidelines for the Practitioner 135
5.4 Quantile Sensitivity Estimation 136
5.5 New Approaches for Using Direct Stochastic Gradients in Simulation Optimization 139
5.5.1 Direct Gradient Augmented Regression (DiGAR) 139
5.5.2 Stochastic Kriging with Gradients (SKG) 141
5.5.3 Gradient Extrapolated Stochastic Kriging (GESK) 142
5.5.4 Secant-Tangents AveRaged Stochastic Approximation (STAR-SA) 144
5.6 Concluding Remarks 145
References 145
6 An Overview of Stochastic Approximation 149
6.1 Introduction 149
6.2 Classical Methods 153
6.2.1 Robbins-Monro (RM) Algorithm 153
6.2.2 Kiefer-Wolfowitz (KW) Algorithm 154
6.3 Well-Known Variants 156
6.3.1 Kesten's Rule 156
6.3.2 Averaging Iterates 157
6.3.3 Varying Bounds 158
6.3.4 Simultaneous Perturbation Stochastic Approximation (SPSA) 160
6.4 Recent Modifications 161
6.4.1 Scaled-and-Shifted Kiefer-Wolfowitz (SSKW) 162
6.4.2 Robust Stochastic Approximation (RSA) 164
6.4.3 Accelerated Stochastic Approximation (AC-SA) for Strongly Convex Functions 165
6.4.4 Accelerated Stochastic Approximation (AC-SA) for Convex Functions 168
6.4.5 Secant-Tangents AveRaged Stochastic Approximation (STAR-SA) 169
6.5 Numerical Experiments 170
6.6 Concluding Remarks 175
References 176
7 Stochastic Approximation Methods and Their Finite-Time Convergence Properties 179
7.1 Introduction 179
7.2 Convex Stochastic Optimization 182
7.2.1 Robust SA Method for Nonsmooth Stochastic Convex Optimization 183
7.2.2 Accelerated SA Methods for Nonsmooth and Smooth Stochastic Optimization 185
7.2.3 Accelerated SA Methods for Strongly Convex Optimization 191
7.3 Nonconvex Stochastic Optimization 195
7.4 Randomized Stochastic Zeroth-Order (SZO) Methods 200
7.5 Summary 204
References 205
8 A Guide to Sample Average Approximation 207
8.1 Introduction 207
8.2 When Is SAA Appropriate? 210
8.3 Detecting When SAA Is Appropriate 216
8.4 Known Properties 221
8.4.1 Almost Sure Convergence 222
8.4.2 Convergence Rates for the SAA Method 225
8.4.3 The SAA Method in the Nonconvex Case 228
8.5 SAA Implementation 230
8.5.1 Sample Size Choice 231
8.5.2 Refined SAA Methods 232
8.6 Asymptotic Efficiency Calculation 234
8.6.1 Asymptotic Rates for Stochastic Approximation 235
8.6.2 Asymptotic Rates for the SAA Method 236
8.6.3 Asymptotic Rates for the RA Method 238
8.7 Conclusions 240
References 241
9 Stochastic Constraints and Variance Reduction Techniques 245
9.1 Introduction 245
9.2 Problems with Stochastic Constraints 248
9.2.1 Problems with General Expected-Value Constraints 249
9.2.2 Problems with Probabilistic Constraints 253
9.2.3 Problems with Stochastic Dominance Constraints 256
9.3 Variance Reduction Techniques 260
9.3.1 Antithetic Variates 261
9.3.2 Latin Hypercube Sampling 262
9.3.3 Quasi-Monte Carlo 265
9.3.4 Importance Sampling 267
9.3.5 Control Variates 269
9.4 Conclusions 270
References 271
10 A Review of Random Search Methods 277
10.1 Introduction 277
10.2 Structure of Random Search Methods for Simulation Optimization 278
10.3 Discrete Simulation Optimization 279
10.3.1 Simulated Annealing 279
10.3.2 Other Developments 282
10.4 Extensions 285
10.4.1 Continuous Simulation Optimization 285
10.4.2 Simulation Optimization with Multiple Objectives 287
References 288
11 Stochastic Adaptive Search Methods: Theory and Implementation 293
11.1 Introduction 294
11.2 Optimization Models for Complex Systems with Noise 295
11.3 Performance Analyses of Stochastic Adaptive Search Methods 297
11.3.1 Pure Random Search and Pure Adaptive Search 298
11.3.2 Hesitant Adaptive Search and Backtracking Adaptive Search 302
11.3.3 Annealing Adaptive Search 304
11.4 Optimization Algorithms Using Hit-and-Run 305
11.4.1 Improving Hit-and-Run (IHR) 306
11.4.2 Simulated Annealing with Hit-and-Run 307
11.4.3 Population-Based Simulated Annealing (Interacting Particle Algorithm) with Hit-and-Run 310
11.4.4 Pattern Hit-and-Run (PHR) 311
11.5 Estimation and Optimization 313
11.6 Conclusions 315
References 315
12 Model-Based Stochastic Search Methods 319
12.1 Introduction 319
12.2 A Brief Review of Model-Based Methods 321
12.3 A Model Reference Optimization Framework 323
12.4 A Stochastic Approximation Framework 328
12.5 A Stochastic Averaging Approach 335
12.6 Conclusions and Open Research Questions 337
References 338
13 Solving Markov Decision Processes via Simulation 341
13.1 Introduction 341
13.2 Background 343
13.3 Discounted Reward MDPs 347
13.3.1 Q-Learning 348
13.3.2 Q-Learning with Function Approximation 350
13.3.3 SARSA(λ) 352
13.3.4 Approximate Policy Iteration 356
13.3.5 Actor-Critic Algorithm 357
13.3.6 Evolutionary Policy Iteration 359
13.4 Average Reward MDPs 362
13.4.1 Relative Q-Learning 362
13.4.2 R-SMART 363
13.5 Finite Horizon MDPs 364
13.5.1 Special Case of Stochastic Shortest Path (SSP) 365
13.5.2 Pursuit Learning Automata (PLA) Sampling 366
13.6 Numerical Results 368
13.7 Extensions 372
13.8 Convergence Theory 373
13.9 Concluding Remarks 373
References 375
Index 381
Holding institution: Institute of Medical Information, Chinese Academy of Medical Sciences