NONLINEAR FRACTIONAL PROGRAMMING

1997 | I. M. Stancu-Minasian
This chapter discusses nonlinear fractional programming, specifically the problem of maximizing the ratio $ q(\boldsymbol{x}) = \frac{f(\boldsymbol{x}) + \alpha}{g(\boldsymbol{x}) + \beta} $ over a set $ S \subseteq \mathbb{R}^n $. It is assumed that $ g(\boldsymbol{x}) + \beta > 0 $ for all $ \boldsymbol{x} \in S $ and that the objective function attains a finite optimal value. The solution methods are generalizations of those in Chapter 3 and fall into three categories:

a) Variable transformation methods, which convert the problem into a form that is easier to solve; these are described in Sections 4.2 and 4.3, including the case of homogeneous functions.

b) Direct methods, which treat the problem as a nonlinear programming problem. If $ f $ and $ g $ satisfy suitable concavity or convexity conditions, the function $ q $ has useful properties, such as pseudoconcavity, which ensures that every local maximum is a global maximum. Gradient methods and linearization techniques are used to find the optimal solution; the SUMT method is also mentioned.

c) Parametric methods, which replace the ratio by a parametric problem $ Q(\lambda) $, where $ \lambda $ is a scalar parameter. The optimal solution of the original problem is determined by solving this parametric problem for suitable values of $ \lambda $ (Section 4.5).

Section 4.1 presents necessary and sufficient optimality conditions for the problem, with the feasible set described by constraints $ g_i(\boldsymbol{x}) \leq 0 $, $ i = 1, \dots, m $. It states that a point $ \boldsymbol{x}_0 $ is optimal if there exist scalars $ u_i $ such that $ g_i(\boldsymbol{x}_0) \leq 0 $, $ \nabla q(\boldsymbol{x}_0) = \sum_{i=1}^m u_i \nabla g_i(\boldsymbol{x}_0) $, $ \sum_{i=1}^m u_i g_i(\boldsymbol{x}_0) = 0 $, and $ u_i \geq 0 $; these are the Karush–Kuhn–Tucker conditions with complementary slackness. The proof shows that these conditions are both necessary and sufficient for optimality.
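The parametric approach in c) is commonly realized by a Dinkelbach-type iteration: given the current iterate, set $ \lambda $ to the current ratio value, solve $ Q(\lambda)\colon \max_{x \in S}\, f(x) + \alpha - \lambda\,(g(x) + \beta) $, and stop when the optimal value of $ Q(\lambda) $ reaches zero, at which point $ \lambda $ equals the optimal ratio. The sketch below illustrates this scheme on an invented one-dimensional instance (the functions, bounds, and names are assumptions for demonstration, not taken from the chapter):

```python
# Dinkelbach-style parametric scheme (a sketch, not the book's exact algorithm)
# for max (f(x) + alpha) / (g(x) + beta) over a feasible set S.

def dinkelbach(f, g, alpha, beta, argmax_F, x0, tol=1e-10, max_iter=100):
    """Iterate lam_k = q(x_k), then let x_{k+1} solve the subproblem
    Q(lam_k): max_x F(x; lam_k) with F(x; lam) = f(x) + alpha - lam*(g(x) + beta).
    Stops when the optimal value of Q(lam) is ~0, i.e. lam is the optimal ratio."""
    x = x0
    for _ in range(max_iter):
        lam = (f(x) + alpha) / (g(x) + beta)   # current value of the ratio q(x)
        x = argmax_F(lam)                      # solve the parametric subproblem Q(lam)
        if f(x) + alpha - lam * (g(x) + beta) < tol:
            break
    return x, (f(x) + alpha) / (g(x) + beta)

# Invented illustrative instance: maximize
#   q(x) = (-x**2 + 4*x + 1) / (x + 2)  on  S = [0, 3].
f = lambda x: -x**2 + 4.0 * x
g = lambda x: x
alpha, beta = 1.0, 2.0

def argmax_F(lam):
    # F(x; lam) = -x**2 + (4 - lam)*x + 1 - 2*lam is concave in x, so the
    # subproblem maximizer is x = (4 - lam)/2, clipped to the interval [0, 3].
    return min(3.0, max(0.0, (4.0 - lam) / 2.0))

x_star, q_star = dinkelbach(f, g, alpha, beta, argmax_F, x0=0.0)
# Analytically, x_star = sqrt(11) - 2 and q_star = 8 - 2*sqrt(11).
```

Because each subproblem is solved exactly, the sequence of $ \lambda $ values increases monotonically to the optimal ratio; for this concave instance the iteration converges in a handful of steps.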