Geothermal model calibration using a global minimization algorithm based on finding saddle points and minima of the objective function
Manuel Plasencia a,*, Andreas Pedersen a, Andri Arnaldsson b, Jean-Claude Berthet b, Hannes Jónsson a,c
a Science Institute of the University of Iceland, VR-III, 107 Reykjavík, Iceland
b Vatnaskil Consulting Engineers, Sudurlandsbraut 50, 108 Reykjavík, Iceland
c Faculty of Physical Sciences, University of Iceland, VR-III, 107 Reykjavík, Iceland

ARTICLE INFO
Received 4 March 2013
Received in revised form 8 September 2013
Accepted 11 September 2013
Available online 27 September 2013
Keywords: Reservoir modeling

ABSTRACT
The objective function used when determining parameters in models for multiphase flow in porous media can have multiple local minima. The challenge is then to find the global minimum and also to determine the uniqueness of the optimized parameter values. A method is presented for mapping out local minima in a search for the global minimum by traversing regions of first-order saddle points on the objective function surface. This approach has been implemented with the iTOUGH2 software for the estimation of model parameters. The method's applicability is illustrated here with two examples: a test problem mimicking a steady-state Darcy experiment and a simplified model of the Laugarnes geothermal area in
Reykjavík, Iceland. A brief comparison with other global optimization techniques, in particular the simulated annealing, differential evolution and harmony search algorithms, is presented.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction
The development of reservoir models often involves inverse modeling, i.e. an estimation of model parameters by fitting calculated values of the response of the system to measurements at discrete points in space and time. The difference between the model calculation and the measured data at the calibration points can be represented by an objective function of the model parameters. The task of estimating the best set of model parameters is thereby formulated as an optimization problem where the goal is to determine the parameter values that minimize the objective function. Even for models with only a few parameters, the resulting objective function can have more than one minimum. This is illustrated in Fig. 1, which shows a one-dimensional cut of an objective function for a geothermal reservoir model described below.
Within the parameter interval shown, three local minima are present. The occurrence of multiple local minima is more likely in models with a larger number of parameters. Hence, the task becomes to find the global minimum of the objective function among several local minima, which is a challenging problem. Furthermore, it is important to know whether additional local minima with only insignificantly higher objective function values are present, since they could, for practical purposes, represent parameter values nearly as good as those at the global minimum.
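The misfit at the calibration points is commonly expressed as a sum of squared residuals. The sketch below is a minimal illustration of this idea, assuming an invented one-parameter forward model `simulate` (not taken from iTOUGH2 or the model of Fig. 1); even such a trivial model yields an objective function with several local minima along a one-dimensional cut:

```python
import numpy as np

def objective(params, observed, simulate):
    """Sum-of-squares misfit between the simulated response and the
    measurements at the calibration points."""
    residuals = simulate(params) - observed
    return float(np.sum(residuals**2))

# Invented one-parameter forward model whose misfit surface has several
# local minima within the scanned interval (purely illustrative).
def simulate(params):
    k = params[0]
    return np.array([np.sin(3.0 * k) + 0.1 * k**2])

observed = np.array([0.0])

# Scan the objective along a 1-D cut through parameter space, as in Fig. 1.
grid = np.linspace(-4.0, 4.0, 401)
values = [objective(np.array([k]), observed, simulate) for k in grid]
```

Scanning the objective this way is only feasible for one or two parameters; for higher-dimensional models the minima must be located by an optimization algorithm.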
Numerical algorithms for optimization can be broadly categorized into local optimization methods and global optimization methods.
Local optimization algorithms involve an iterative process where, starting from some initial guess, new parameter values are found so as to lower the value of the objective function. Such algorithms only find local minima, typically the local minimum nearest to the initial guess. Most local optimization methods rely on evaluation of the gradient of the objective function. Some examples are the steepest descent, conjugate gradient, Quasi-Newton and
Levenberg–Marquardt methods. By carrying out multiple minimizations starting from different initial guesses, such methods can be used to find the global minimum but this becomes an inefficient procedure when many parameters are varied.
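The multistart strategy just described can be sketched as follows; the hand-rolled gradient-descent routine and the two-well objective are invented for illustration, standing in for the gradient-based methods listed above:

```python
def local_minimize(f, grad, x0, lr=0.01, tol=1e-8, max_iter=20000):
    """Plain gradient descent: follow -grad(f) from x0 to the nearest
    local minimum. A simple stand-in for the gradient-based local
    methods (steepest descent, quasi-Newton, Levenberg-Marquardt)."""
    x = float(x0)
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:      # gradient (approximately) zero: converged
            break
        x -= lr * g
    return x

# Invented objective with local minima at x = -1 and x = 2.
f = lambda x: (x + 1) ** 2 * (x - 2) ** 2
grad = lambda x: 2 * (x + 1) * (x - 2) ** 2 + 2 * (x + 1) ** 2 * (x - 2)

# Multistart: repeat the local search from several initial guesses and
# keep the best result. The number of starts needed to cover parameter
# space grows quickly with the number of parameters.
starts = [-3.0, 0.0, 3.0]
candidates = [local_minimize(f, grad, x0) for x0 in starts]
best = min(candidates, key=f)
```

Each start converges only to the minimum of its own basin, which is why several starts are needed to see both minima of this objective.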
Global optimization algorithms, on the other hand, attempt to find the global minimum by also allowing the objective function to increase during the iterative procedure. Some examples are simulated annealing using Markov chain Monte Carlo methods, and evolutionary algorithms such as differential evolution, harmony search, and particle swarm optimization. These methods do not make use of the gradient of the objective function and tend to converge more slowly to minima of the objective function, but they have the advantage over local optimization methods that they can identify the global minimum. Three of these algorithms will be briefly described here: the simulated annealing, differential evolution and harmony search algorithms. These are implemented in the
Computers & Geosciences 65 (2014) 110–117
http://dx.doi.org/10.1016/j.cageo.2013.09.007
© 2013 Elsevier Ltd. All rights reserved.
* Corresponding author.
E-mail addresses: firstname.lastname@example.org, email@example.com (M. Plasencia).
iTOUGH2 software, and will be compared with the global optimization method proposed here.
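As a concrete illustration of a gradient-free global method, here is a minimal sketch of the textbook DE/rand/1/bin variant of differential evolution (a generic version, not the iTOUGH2 implementation), applied to the Rastrigin function, a standard multimodal test case:

```python
import numpy as np

rng = np.random.default_rng(42)

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=300):
    """Minimal DE/rand/1/bin: mutation with a scaled difference vector,
    binomial crossover, and greedy one-to-one selection. Only function
    values of f are used; no gradients are needed."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # take at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:            # greedy selection
                pop[i], cost[i] = trial, f_trial
    i_best = int(np.argmin(cost))
    return pop[i_best], float(cost[i_best])

# Rastrigin function: many local minima on a regular grid, global
# minimum of 0 at the origin.
def rastrigin(x):
    return 10.0 * len(x) + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

best, best_cost = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

A multistart local method applied to this function would require a start in the immediate basin of the origin, whereas the population-based search escapes the surrounding local minima without gradient information.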
Simulated annealing (Kirkpatrick et al., 1983) is an iterative procedure in which an initial guess of the parameter values is repeatedly updated with random increments and a selection criterion until a termination condition is reached. The objective function is taken to represent an ‘energy’ of the system, and a fictitious temperature is introduced to control the probability of accepting increases in the objective function as intermediate steps towards ultimately reaching lower function values. A central issue in simulated annealing calculations is the ‘time scale’ of the cooling of the system from high temperature to zero temperature. The slower the cooling rate, the more likely it is that the global minimum is found, but the computational effort also becomes larger. It has been shown that in the unattainable limit of infinitely long simulations with an infinitesimal cooling rate, the method is guaranteed to find the global minimum (Haario and
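A minimal one-dimensional sketch of such an annealing loop, with a Metropolis acceptance criterion and a geometric cooling schedule, is given below; this is a generic illustration, not the iTOUGH2 implementation, and the double-well ‘energy’ is invented:

```python
import math
import random

random.seed(0)

def simulated_annealing(f, x0, T0=3.0, cooling=0.999, step=0.5, n_iter=20000):
    """Metropolis-style simulated annealing in one dimension. Uphill
    moves are accepted with probability exp(-df/T); the temperature is
    multiplied by `cooling` each iteration (geometric schedule), so a
    factor closer to 1 means slower cooling, a better chance of finding
    the global minimum, and a larger computational effort."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(n_iter):
        x_new = x + random.uniform(-step, step)
        f_new = f(x_new)
        df = f_new - fx
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-df/T), which shrinks as T is lowered.
        if df <= 0 or random.random() < math.exp(-df / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

# Invented double-well 'energy': a local minimum near x = -0.8 and the
# global minimum at x = 2; the walk starts in the wrong basin.
f = lambda x: (x + 1) ** 2 * (x - 2) ** 2 + 0.5 * (x - 2) ** 2
best_x, best_f = simulated_annealing(f, x0=-1.0)
```

With a fast cooling schedule (a small `cooling` factor) the walk tends to freeze in the starting basin; the slow schedule used here lets it cross the barrier before the acceptance probability for uphill moves becomes negligible.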