# Introduction

Digital computer simulation models have been a very successful approach for analyzing complex systems. In simulation models, an analytical expression for the input/output relationship is generally unavailable, which makes output performance analysis a cumbersome task. Following Barton and Meckesheimer [2], simulation optimization is defined as the repeated analysis of simulation models with different values of design parameters, in an attempt to identify the best simulated system performance. Since the 1950s, when simulation optimization was introduced as a new research area, many practical problems have been modeled as simulation optimization problems and successful results have been achieved. Some recent achievements are the works of Liu and Maghsoodloo [17] and Kleijnen et al. [14]. A number of software packages with simulation optimization capability have also been developed and added to well-known simulation software. However, the literature still awaits the development of new, more robust and efficient methods. For recent surveys in this field, we acknowledge the studies conducted by Fu and Glover [7], Tekin and Sabuncuoglu [28], Henderson and Nelson [8], and Kleijnen [13]. Since simulation models have stochastic responses, they potentially require extensive runtime. When an optimization routine is incorporated into a simulation model, this problem intensifies dramatically. Due to the stochastic nature of simulation models, the output must be estimated by averaging the output values over a reasonable number of replications [10]. In addition, simulation models are usually treated as black-box models, in which the output function is not expressed explicitly.
Therefore, determining the combination of controllable variables that provides the best output value can be done either by fitting an explicit function to the output values or by one of the algorithmic input/output optimization routines. One of these approaches is the use of metamodel-based methods. Metamodels provide deterministic objectives that surrogate the simulation models and generally require less computational effort. One of the powerful metamodels addressed in the literature is the Kriging method. Kriging is an interpolation method initially developed for use in spatial statistics. The method, originally proposed by Krige [15], was applied by Cressie [5] in geology. Later it was applied in areas like economics and the modeling of black-box computer experiments [9]. The efficiency of this method in deterministic simulation models has been demonstrated [11,20,26]. Based on the good results obtained in deterministic simulation, Van Beers and Kleijnen [3] applied this method to random simulation models. Considering the successful application of Kriging methodology in the design and analysis of deterministic computer experiments, Ankenman et al. [1] introduced the stochastic Kriging (SK) metamodel, inspired by Kriging methodology; they extended the Kriging methodology, applied successfully in deterministic simulation as a global metamodel, to stochastic simulation. Despite its high accuracy and successful results, SK has not been used as a metamodel in many papers and has not been incorporated into any commercial simulation software package. A powerful population-based metaheuristic that can efficiently solve large-scale optimization problems is the nested partitions (NP) method. This method was introduced by Shi and Ólafsson [21], inspired by adaptive partitioned random search (APRS) [27] and the branch-and-bound algorithm.
Concentrating the search effort in the regions most likely to contain the global optimum is the key idea behind the NP method, and this goal is achieved by partitioning. The NP method has been used in both deterministic [21] and stochastic [22] optimization problems and has been successfully applied in many areas, such as planning and scheduling, logistics and transportation, supply chain design, data mining, health care, and task assignment. It has also been used for solving difficult optimization problems like the traveling salesman problem [23] and production scheduling problems [21]. More details about these applications are provided in [4,11]. Based on Shi and Ólafsson [25], the NP results depend significantly on the sampling method, and low-quality samples can affect the final result. In the young literature on NP, it is common to use other heuristics to improve sample quality. Using heuristic search methods for optimizing simulation models has been a very successful approach. The majority of research in this area focuses on well-known algorithms like the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO). PSO is a global population-based metaheuristic developed by Kennedy and Eberhart [6,12] in 1995 for optimizing nonlinear programming problems. The key idea of the algorithm was derived from the movement of flying birds: a population (swarm) consists of particles, which represent solutions in the feasible region. Each particle's position and velocity are updated based on the best performance achieved by the particle so far (its own previous best position) and the best performance obtained by any other particle. In this paper we develop a metamodel-based simulation optimization hybrid algorithm; it is hybrid in the sense that it is constructed by modifying and combining several optimization routines.
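The velocity and position update just described can be sketched as follows (a minimal PSO step in Python; the inertia and acceleration coefficients `w`, `c1`, `c2` are illustrative choices, not values taken from the paper):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration: every particle is pulled toward its own best
    position (pbest) and the swarm-wide best position (gbest)."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(pos.shape)  # stochastic weights for the cognitive term
    r2 = rng.random(pos.shape)  # stochastic weights for the social term
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

Iterating this step while recording each particle's best-seen position, and the best among those, yields the improving sequence of solutions the NP sampling step relies on.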
We used the stochastic Kriging (SK) metamodel for fitting a functional relation to the inputs and outputs of our simulation models. SK is a global metamodel, since it is fitted over the whole feasible space. We then incorporated the NP method into our hybrid algorithm; through the NP method, the hybrid algorithm concentrates its search effort in the regions most likely to contain the global optimum. Finally, we integrated the PSO method as the searching mechanism for improving the search process of the proposed hybrid algorithm. Through these integrations, a hybrid algorithm for the optimization of digital simulation models is developed, which is described in more detail in the next sections of this manuscript.

# II.

# Problem Statements

In this article we focus on the simplest optimization problem: an unconstrained simulation model with continuous variables and a single output. We aim to minimize the expected value of this univariate output. Despite their simplicity, these kinds of problems have many applications in practice; examples are (s, Q) inventory management simulations, inventory production simulation models in logistics and operations management, and queuing simulation models [13]. The problem is formulated as follows:

$$\min_{x} \; E(w(x)) \quad (1)$$

$$\text{subject to} \quad l_j \le x_j \le u_j, \qquad j = 1, 2, \ldots, d \quad (2)$$

where $x$ denotes a $d$-dimensional input vector, $w$ indicates the output value, and $l_j$ and $u_j$ are the lower and upper bounds of input variable $x_j$. The expected value in the objective function is estimated by simulation, and we consider only box constraints.

# III.

# Development of the Hybrid Algorithm

The developed hybrid algorithm is a metamodel-based optimization process. We used the stochastic Kriging (SK) metamodel and employed a sequential experiment design for validating its results. In the optimization part we used a hybrid metaheuristic built on the nested partitions (NP) method, which offers quick convergence for large-scale problems requiring high computational effort, and we also exploited the particle swarm optimization (PSO) algorithm for its simplicity and good local search ability.
During the process of developing the hybrid algorithm, we first developed an algorithm integrating the NP and PSO algorithms, which we called the PSPO algorithm [19]. The efficiency of the PSPO algorithm was then tested through a computational experiment. We found that although the solution obtained by this algorithm was either optimal or close to optimal, the computational effort required to obtain it was extensive. For this reason we redirected our research toward a metamodel-based algorithm and developed the hybrid algorithm. Fig. 1 presents a flow diagram of the developed hybrid algorithm. According to Fig. 1, the first part of the hybrid algorithm includes initial sampling (step 1) and simulating these sample points with a specified number of replications (step 2). The second stage is an iterative process of fitting the metamodel to the input/output data (step 3) and then validating the metamodel by the method developed by Liu, Nelson, and Staum [18], which is based on cross-validation and was developed for stochastic Kriging metamodels (step 4). In the last stage, the algorithm starts with a primary partitioning of the feasible space (step 5) and then samples and improves these sample points in each partition (step 6). Using the validated metamodel and the improved samples, we compute the fitness value of each partition, determine the best (most promising) partition, and consequently the best point (step 7). This point is used for re-partitioning the solution space and is added to the experiment design, which can then be used for updating the metamodel (step 8). After this step, the algorithm checks whether any significant improvement has occurred. If no significant improvement is achieved over a pre-specified number of simulation runs, denoted by $I_{max}$, the algorithm stops and accepts the best obtained point; otherwise it loops back to step 6.
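The flow of Fig. 1 can be summarized as the following loop (a language-neutral sketch; names mirror the steps described above):

```text
D ← initial max-min LHS design of 5 + 2d points          (step 1)
simulate D, averaging each point over n_i replications   (step 2)
fit SK metamodel to (D, w̄); validate by leave-one-out CV (steps 3–4)
partition Θ by halving every dimension                   (step 5)
while stopping rule not satisfied:                       (step 9)
    sample uniformly in each partition; improve by PSO,
        evaluating with the metamodel                    (step 6)
    pick winner partition and best point x_k_opt         (step 7)
    simulate x_k_opt, add it to D, refit the metamodel,
        re-partition around x_k_opt (or backtrack)       (step 8)
return x_opt
```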
For a detailed description of the algorithm steps, let the whole feasible space be denoted by $\Theta$, the algorithmic iteration counter by $k$, and the most promising partition by $\sigma(k)$. The following sections provide more details for each step.

# Step 1 : Performing the initial experiment design

The first step of the hybrid algorithm generates some initial simulation observations based on a statistical sampling plan. Selecting an efficient experiment design is an important step in the process of developing a metamodel and can affect the hybrid algorithm's performance. Since we intend to use the stochastic Kriging metamodel, and based on the assumption that there is no prior information about the output values, we employed max-min Latin Hypercube sampling [13] while checking the box constraints. Considering a $d$-dimensional space, we used the sample size suggested by Kleijnen [14]; therefore the initial sample size was set to $5 + 2d$ points. In step 5 we explain the sequential experiment design in more detail.

# Step 2 : Simulating the initial design points

Stochastic simulation models have stochastic output values, so we must average the simulation outputs over a number of replications. If $n_i$ denotes the number of simulation runs at a design point $x_i$, the average is obtained by:

$$\bar{w}(x_i) = \frac{1}{n_i} \sum_{l=1}^{n_i} w_l(x_i) \quad (3)$$

where $w_l(x_i)$ represents the simulation output of the $l$th replication. For determining the number of replications at each design point, we used the method of Liu et al. [18], through which the output variance is reduced. First we performed $n_0$ replications for every design point $x_i$ and calculated the mean $\bar{w}(x_i)$. We then used the following condition, proposed by Law [16], for obtaining $n_i$:

$$l(x_i, n_i; \alpha) \le \gamma' \left| \bar{W}(x_i) \right| \quad (4)$$

where $l(x_i, n_i; \alpha)$ is the half-width of the $(1-\alpha)$ confidence interval for the mean and $\gamma' = \gamma/(1+\gamma)$ is the adjusted relative error. Therefore $n_i$ is obtained as:

$$n_i = \max\left\{ n_0,\; \left\lceil \left( \frac{t_{n_0-1,\,1-\alpha/2}\; S(x_i)}{\gamma'\, \bar{W}(x_i)} \right)^2 \right\rceil \right\} \quad (5)$$

As a result, $n_i - n_0$ additional replication runs should be executed at design point $x_i$. After determining the design points and the number of replications at each point, all the design points can be simulated.

# Step 3 : Fitting the stochastic Kriging metamodel

The intrinsic (simulation) variance at the $m$ design points is collected in the diagonal matrix

$$\hat{\Sigma}_\varepsilon = \mathrm{Diag}\left\{ \frac{\hat{V}(x_1)}{n_1}, \frac{\hat{V}(x_2)}{n_2}, \ldots, \frac{\hat{V}(x_m)}{n_m} \right\} \quad (6)$$

where

$$\hat{V}(x_i) = \frac{1}{n_i - 1} \sum_{j=1}^{n_i} \left( w_j(x_i) - \bar{w}(x_i) \right)^2$$

The SK parameters $\beta_0$, $\tau^2$, and $\theta$ are estimated by maximizing the following log-likelihood function:

$$\ln L(\beta_0, \tau^2, \theta) = -\frac{M}{2}\ln(2\pi) - \frac{1}{2}\ln\left| \tau^2 R(\theta) + \hat{\Sigma}_\varepsilon \right| - \frac{1}{2}\left( \bar{w} - \beta_0 \mathbf{1}_M \right)^T \left[ \tau^2 R(\theta) + \hat{\Sigma}_\varepsilon \right]^{-1} \left( \bar{w} - \beta_0 \mathbf{1}_M \right) \quad (7)$$

where $R(\theta)$ is the spatial correlation matrix of the design points and $\mathbf{1}_M$ is a vector of ones. For more details, we refer the reader to Ankenman, Nelson, and Staum [1].

# Step 4 : Validating the metamodel

For checking the validity of the metamodel we used the leave-one-out cross-validation method. The main goal of this method is to decrease the difference between the real simulation output at a specific design point, $\bar{w}(x_i)$, and the metamodel prediction at the same design point after eliminating that point, denoted by $\hat{w}_{(-i)}(x_i)$. In other words, the key idea is to control the relative leave-one-out prediction error at each design point, which is calculated as follows:

$$\frac{\left| \bar{w}(x_i) - \hat{w}_{(-i)}(x_i) \right|}{\left| \bar{w}(x_i) \right|} \quad (8)$$

Liu and coworkers [18] demonstrated that achieving this goal requires controlling the value $E_i$, calculated as follows:

$$E_i = \frac{l(x_i, n_i; \alpha) + \left| \bar{w}(x_i) - \hat{w}_{(-i)}(x_i) \right|}{\left| \bar{w}(x_i) \right| - l(x_i, n_i; \alpha)} \quad (9)$$

where the half-width $l(x_i, n_i; \alpha)$ is a measure of the uncertainty in $\bar{w}(x_i)$. Before cross-validation we separate out the design points that are not on the edges of the design space; i.e., for a design point $x_i = (x_1, x_2, \ldots, x_d)$ in $d$ dimensions, we select the points for which none of the factors takes an extreme value. This subset of design points is denoted by set II, and $E_i$ is calculated for this subset.
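The replication rule of Eqs. (4)-(5) can be sketched as follows (a minimal Python version assuming the relative-precision form from Law; the default `t_quantile` and `gamma` values are illustrative, not values from the paper):

```python
import math

def required_replications(pilot_outputs, gamma=0.1, t_quantile=2.262):
    """Number of replications n_i needed so that the confidence-interval
    half-width for the mean stays within relative error gamma of |W_bar|
    (Eq. 5).  `t_quantile` stands for t_{n0-1, 1-alpha/2}; the default
    2.262 corresponds to n0 = 10 pilot runs and alpha = 0.05.
    """
    n0 = len(pilot_outputs)
    w_bar = sum(pilot_outputs) / n0
    # sample standard deviation S(x_i) over the n0 pilot replications
    s = math.sqrt(sum((w - w_bar) ** 2 for w in pilot_outputs) / (n0 - 1))
    g = gamma / (1.0 + gamma)  # adjusted relative error gamma'
    n_i = math.ceil((t_quantile * s / (g * abs(w_bar))) ** 2)
    return max(n0, n_i)
```

The caller then performs `required_replications(...) - n0` additional runs at the design point, exactly as the step above prescribes.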
Based on equation (9), the lack of credibility measured by $E_i$ originates from two sources:

- the relative difference between the simulation output and the true value (relative precision of the simulation output), which requires adding more replications;
- the relative difference between the true value and the leave-one-out prediction, which requires adding more design points.

Accordingly, at iteration $k$ of the validation process we proceed as follows:

a) Calculate $\bar{w}_k(x)$ and $S_k^2(x)$ for all design points in set II.
b) Calculate $E_i$ as in equation (9) for all design points in set II; if the values are acceptable, accept the metamodel; otherwise add replications or design points and compute $\bar{w}_{k+1}(x)$ and $S_{k+1}^2(x)$.
c) $k \leftarrow k + 1$.
d) Go to step 4.

# Step 5 : Initial partitioning

In the optimization process we first partition the experimental area. Different methods have been developed for partitioning in the NP algorithm. In this article, for the initial partitioning we divided the range of each variable, defined by the box constraints, into two halves; in this way each variable's dimension is divided into two equal parts. We then use the SK metamodel for determining the potential areas.

# Step 6 : Sampling and improving samples

After defining the partition boundaries, a uniform random sampling procedure is used within each partition. In this article the PSO algorithm is used to improve the quality of the initial samples and generate feasible solutions, which improves the effectiveness and efficiency of the NP algorithm. Suppose we have $N$ initial random samples in the $j$th partition, denoted by $D_j^I = \{x_{j1}^I, x_{j2}^I, \ldots, x_{jN}^I\}$. Using the PSO algorithm, the hybrid algorithm generates a sequence of improving solutions until no further improvement is possible. The final solutions, which are more likely to be near the optimal solution, are denoted by $D_j^F = \{x_{j1}^F, x_{j2}^F, \ldots, x_{jN}^F\}$. By using high-quality samples we increase the probability of selecting the correct partition and making correct moves. Since simulation runs are assumed to be computationally very time consuming, in the process of running the PSO we used the metamodel for estimating outputs instead of the simulation run outputs.
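The initial halving of Step 5 can be sketched as follows (Python; bisects every dimension of the box constraint at its midpoint, yielding 2^d initial sub-boxes):

```python
from itertools import product

def halve_box(lower, upper):
    """Split the box [lower, upper] into 2^d sub-boxes by bisecting
    every dimension at its midpoint (initial partitioning, Step 5)."""
    mid = [(l + u) / 2.0 for l, u in zip(lower, upper)]
    boxes = []
    for choice in product((0, 1), repeat=len(lower)):
        # choice[c] = 0 keeps the lower half of dimension c, 1 the upper half
        lo = [l if c == 0 else m for c, l, m in zip(choice, lower, mid)]
        hi = [m if c == 0 else u for c, m, u in zip(choice, mid, upper)]
        boxes.append((lo, hi))
    return boxes
```

Uniform sampling within each returned sub-box then provides the initial particles that the PSO refinement of Step 6 improves.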
It should be noted that in all iterations after the first we will have five partitions for sampling; more details are provided in step 8.

# Step 7 : Determining the winner partition and the near-optimal solution

The final population is used for estimating the promising index and thereby determining the winner partition (the next most promising region). To achieve this, we use the validated metamodel to estimate the output for all the final samples; the estimate for the $i$th sample in the $j$th partition is denoted by $Y(x_{ji}^F)$. The best value within each partition defines the fitness coefficient of that partition; for minimization problems it is calculated as:

$$Y(\sigma_j) = \min_{i \in \{1, 2, \ldots, N\}} Y(x_{ji}^F), \qquad j = 1, 2, \ldots, 5 \quad (10)$$

The partition with the best overall fitness coefficient is chosen as the next most promising partition, with index:

$$\hat{j}_k = \arg\min_{j \in \{1, 2, \ldots, 5\}} Y(\sigma_j) \quad (11)$$

If this index corresponds to a sub-region of $\sigma(k)$, i.e. $\hat{j}_k \le 4$, we let it be the next most promising partition:

$$\sigma(k+1) = \sigma_{\hat{j}_k}(k) \quad (12)$$

But if it belongs to the complementary region, we backtrack to the previous iteration:

$$\sigma(k+1) = \sigma(k-1) \quad (13)$$

The features of all partitions in the different iterations are saved, and the backtrack flag is set; this flag is used for counting the number of backtracking simulation runs. We also record the best point obtained. The index of the best point in the $j$th partition is:

$$\hat{i}_j = \arg\min_{i \in \{1, 2, \ldots, N\}} Y(x_{ji}^F), \qquad j = 1, 2, \ldots, 5 \quad (14)$$

The best overall point is then $x_k^{opt} = x_{\hat{j}_k \hat{i}_{\hat{j}_k}}^F$, which will be used in the next partitioning (step 8). We also compare this point with the best answer obtained so far, $x^{opt}$, and record the point with the better performance measure.
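The promising-index computation of Eqs. (10)-(14) reduces to two nested minimizations, sketched here (Python; `predict` stands in for the validated metamodel and is an assumed callable, not an implementation from the paper):

```python
import numpy as np

def winner_partition(final_samples, predict):
    """final_samples: list with one (N, d) array of improved samples per
    partition.  Returns (index of the most promising partition, best point
    overall), using metamodel predictions in place of simulation runs."""
    part_best_vals, part_best_pts = [], []
    for samples in final_samples:
        preds = predict(samples)              # Y(x_ji^F) for all samples
        i_best = int(np.argmin(preds))        # Eq. (14): best point index
        part_best_vals.append(preds[i_best])  # Eq. (10): partition fitness
        part_best_pts.append(samples[i_best])
    j_best = int(np.argmin(part_best_vals))   # Eq. (11): winner partition
    return j_best, part_best_pts[j_best]
```

Whether the winning index triggers a move into a sub-region (Eq. 12) or a backtrack (Eq. 13) is decided by the caller from `j_best`.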
# Step 8 : Updating the metamodel and re-partitioning the solution space

In this step the near-optimal point obtained from the previous step, $x_k^{opt}$, is simulated and added to the experimental design. With this new experimental design we re-fit and re-validate the metamodel; the updated metamodel, which may be more accurate, is used in the subsequent iterations. As explained before, if the current near-optimal point belongs to the complementary region we backtrack and use the previous partitioning; but if it belongs to the promising region, the algorithm partitions the promising region into four new partitions (four rectangles) such that the obtained best point becomes one corner of each rectangle while the other corners are kept as the corners of the promising region. Clearly these regions may not have equal area. All the remaining parts of the region outside these four partitions are aggregated to form the complementary partition. Having these five partitions, the algorithm checks the stopping rule, and if the stopping rule is not satisfied it returns to step 6 for new sampling.

# Step 9 : Checking the stopping rule

The hybrid algorithm proceeds iteratively until no significant improvement is realized. To check this condition the algorithm compares the objective function value of the current best point, $w(x_k^{opt})$, with that of the best point obtained among all previous iterations, $w(x^{opt})$. It first calculates the Student-t prediction error:

$$t_0 = \frac{\bar{w}(x_k^{opt}) - \bar{w}(x^{opt})}{\sqrt{\hat{v}(\bar{w}(x_k^{opt})) + \hat{v}(\bar{w}(x^{opt}))}} \quad (15)$$

The current best point is accepted as the best solution so far only if $t_0 < -t_{1-\alpha, df}$; otherwise we conclude that there was no improvement. Here $df$ is the degrees of freedom, calculated as:

$$df = \min(n_k, n_{opt}) \quad (16)$$

In our experiments we set $\alpha = 0.1$. We also set a threshold of 30 on the maximum number of unimproved replications, i.e. $I_{max} = 30$.
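The stopping test of Eqs. (15)-(16) can be sketched as follows (Python; a Welch-style comparison of the two mean estimators, with the t quantile supplied by the caller for $df = \min(n_k, n_{opt})$ since this sketch assumes no statistics library):

```python
import math

def improved(w_new, var_new, w_best, var_best, t_crit):
    """Stopping-rule check of Eqs. (15)-(16) for minimization: the candidate
    point is a significant improvement only when the t statistic falls below
    -t_{1-alpha, df}.  w_* are the estimated mean outputs, var_* the
    estimated variances of those means, and t_crit the positive quantile
    t_{1-alpha, df}."""
    t0 = (w_new - w_best) / math.sqrt(var_new + var_best)
    return t0 < -t_crit
```

If `improved(...)` is false for `I_max` consecutive simulation runs, the algorithm terminates and reports the incumbent best point.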
Whenever this threshold is reached, the iterations are terminated: the algorithm stops and the best solution point is acknowledged as the final solution point $x^{opt}$.

# IV.

# Computational Experiments

For evaluating the performance of the developed algorithm, it is common to employ known response surface functions presented in the optimization literature. We selected ten hard, complex test problems as the benchmark. Because these problems are deterministic, we added noise to their objective function relations. The main advantage of using these test problems is the difficulty of obtaining their optimal solutions. In this section the developed hybrid algorithm is evaluated against three known algorithms cited in the literature. The ten unconstrained test problems include, for example, P1, which has just one global minimum and is defined over $-2 \le x_j \le 2$, $j = 1, 2$, and P2 and P3, which are multimodal two-dimensional functions of exponential type defined over $-3 \le x_j \le 3$, $j = 1, 2$. Each test problem was solved using the developed algorithm and the three above-mentioned algorithms. To account for the stochastic output of the simulation models, twenty replications of the simulation model were considered and the test problems were solved by each method. The performance criteria were the number of simulated points and the quality of the final results. Table (1) summarizes the computational results over 20 replications for the four considered algorithms in terms of the best result.
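Turning a deterministic benchmark into a stochastic "simulation" model, as done for the test problems above, can be sketched as follows (Python; the sphere function and the noise level `sigma` are illustrative stand-ins, not the paper's P1):

```python
import numpy as np

def make_noisy(f, sigma=0.1, rng=None):
    """Wrap a deterministic test function f with additive Gaussian noise,
    so each call behaves like one stochastic simulation replication."""
    if rng is None:
        rng = np.random.default_rng()
    def sim(x):
        return f(x) + rng.normal(0.0, sigma)
    return sim

def estimate_output(sim, x, replications=20):
    """Average the noisy outputs over several replications, as in Eq. (3)."""
    return sum(sim(x) for _ in range(replications)) / replications
```

With twenty replications per evaluation, as used in the experiments, the averaged output approximates the underlying deterministic objective while keeping the stochastic character of a simulation model.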
As seen in Table (1), in eight out of ten test problems the developed hybrid algorithm provided the optimal solutions. For the two other problems, #4 and #5, the objective function values are very close to the optimal, with deviations of less than 0.1 and 0.3 percent, respectively. Table (2) summarizes the computational results over 20 replications for the four considered algorithms in terms of the average best result. In four out of ten test problems the developed hybrid algorithm provided average solutions equal to the optimal solutions. For the other problems, except problems #3 and #5, the average solutions deviate by less than 1.1 percent from the optimal solutions. For problem #3 the deviation is 4.06 percent from the optimal solution. Only for problem #5 did we obtain a solution point far from the optimum, which caused the average solution to have an unacceptable deviation from the optimal solution. Table (3) summarizes the number of simulated points required to obtain the final solution. As emphasized earlier, the simulation runs are assumed to be expensive, or, from a computational point of view, very time consuming. Therefore the number of simulated points required to obtain the optimal solution is a very critical performance measure in evaluating the optimization algorithms. Considering this performance measure, the developed hybrid algorithm outperforms the other three algorithms: in most of the ten test problems, the developed algorithm provided more than an 80 percent saving in the number of simulated points compared with the average number of points simulated by the other three algorithms.

© 2014 Global Journals Inc. (US)

# V.

## Conclusions and Remarks

In this paper, a metamodel-based hybrid algorithm was developed for the optimization of digital computer simulation models. It is assumed that the considered simulation models are computationally expensive and have a single output function that is nonlinear, continuous, and unconstrained. By computationally expensive we mean that each simulation run requires extensive computational time. The hybrid algorithm was developed by modifying and integrating several concepts and routines. We incorporated the NP and PSO algorithms to develop an efficient search mechanism, and we adopted the stochastic Kriging (SK) metamodel for stochastic simulation optimization, integrating it into the search mechanism to facilitate fitting a function to the simulation's output. As a result, a hybrid algorithm was developed. Its efficiency was then evaluated through computational experiments: ten complex test problems were selected from the literature, and the performance of the developed hybrid algorithm was compared against three other algorithms, two selected from the existing literature and one a preliminary algorithm developed by the authors. The results of these computational experiments revealed that the developed hybrid algorithm can provide very robust solutions with very low computational effort.
# References

1. B. Ankenman, B.L. Nelson, J. Staum, Stochastic kriging for simulation metamodeling, Operations Research 58 (2010).
2. R.R. Barton, M. Meckesheimer, Metamodel-based simulation optimization, in: Handbooks in Operations Research and Management Science, Elsevier/North-Holland, 2006.
3. W.C.M. van Beers, J.P.C. Kleijnen, Kriging interpolation in simulation: a survey, in: Proceedings of the 36th Conference on Winter Simulation, 2004.
4. W. Chen, L. Pi, L. Shi, Optimization and Logistics Challenges in the Enterprise, Springer Optimization and Its Applications, Springer, Dordrecht/Heidelberg/London/New York, 2009.
5. N. Cressie, The origins of kriging, Mathematical Geology 22 (1990).
6. R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the 6th International Symposium on Micro Machine and Human Science, 1995.
7. M.C. Fu, F.W. Glover, J. April, Simulation optimization: a review, new developments, and applications, in: Proceedings of the Winter Simulation Conference, 2005.
8. S.G. Henderson, B.L. Nelson (Eds.), Handbooks in Operations Research and Management Science: Simulation, Elsevier/North-Holland, 2006.
9. D. Huang, T.T. Allen, W.I. Notz, N. Zheng, Global optimization of stochastic black-box systems via sequential kriging meta-models, Journal of Global Optimization 34 (2006).
10. R.D. Hurrion, An example of simulation optimisation using a neural network metamodel: finding the optimum number of kanbans in a manufacturing system, Journal of the Operational Research Society 48 (1997).
11. D.R. Jones, M. Schonlau, W.J. Welch, Efficient global optimization of expensive black-box functions, Journal of Global Optimization 13 (1998).
12. J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995.
13. J.P.C. Kleijnen, Response surface methodology for constrained simulation optimization: an overview, Simulation Modelling Practice and Theory 16 (2008).
14. J.P.C. Kleijnen, W. van Beers, I. van Nieuwenhuyse, Constrained optimization in expensive simulation: novel approach, European Journal of Operational Research 202 (2010).
15. D.G. Krige, Kriging Metamodeling for Simulation, Ph.D. Dissertation, Faculty of Economics and Business Administration, Tilburg University, Netherlands, 1950.
16. A.M. Law, Simulation Modeling and Analysis, fourth ed., McGraw-Hill, Boston, 2007.
17. H. Liu, S. Maghsoodloo, Simulation optimization based on Taylor kriging and evolutionary algorithm, Applied Soft Computing 11 (2011).
18. M. Liu, B.L. Nelson, J. Staum, Simulation on demand for pricing many securities, in: Proceedings of the 2010 Winter Simulation Conference, 2010.
19. Z. Omranpour, F. Ghassemi-Tari, Development of PSPO simulation optimization algorithm, Journal of Industrial & Systems Engineering 5 (2012).
20. T.J. Santner, B.J. Williams, W.I. Notz, The Design and Analysis of Computer Experiments, Springer-Verlag, New York, 2003.
21. L. Shi, S. Ólafsson, Nested partitions method for global optimization, Operations Research 48 (2000).
22. L. Shi, S. Ólafsson, Nested partitions method for stochastic optimization, Methodology and Computing in Applied Probability 2 (2000).
23. L. Shi, S. Ólafsson, N. Sun, A new parallel randomized algorithm for the traveling salesman problem, Computers and Operations Research 26 (1999).
24. L. Shi, S. Ólafsson, Nested Partitions Method: Theory and Applications, International Series in Operations Research & Management Science, Springer, 2008.
25. L. Shi, S. Ólafsson, Nested partition optimization, in: Encyclopedia of Optimization, Springer, 2009.
26. T.W. Simpson, T.M. Mauery, J.J. Korte, F. Mistree, Kriging models for global approximation in simulation-based multidisciplinary design optimization, AIAA Journal 39 (2001).
27. Z.B. Tang, Adaptive partitioned random search to global optimization, IEEE Transactions on Automatic Control 39 (1994).
28. E. Tekin, I. Sabuncuoglu, Simulation optimization: a comprehensive review on theory and applications, IIE Transactions 36 (2004).