# Constrained Optimization and Lagrange Multiplier Methods

- 1st Edition - November 28, 1982
- Author: Dimitri P. Bertsekas
- Editor: Werner Rheinboldt
- Language: English
- Hardback ISBN: 978-0-12-093480-5
- Paperback ISBN: 978-1-4832-3613-1
- eBook ISBN: 978-1-4832-6047-1

Computer Science and Applied Mathematics: Constrained Optimization and Lagrange Multiplier Methods focuses on advances in the application of Lagrange multiplier methods to constrained minimization.

The book first presents the method of multipliers for equality constrained problems and then for inequality constrained and nondifferentiable optimization problems. Discussions cover approximation procedures for nondifferentiable and ill-conditioned optimization problems; asymptotically exact minimization in the methods of multipliers; a duality framework for the method of multipliers; and the quadratic penalty function method.

The text then examines exact penalty methods, including nondifferentiable exact penalty functions; linearization algorithms based on nondifferentiable exact penalty functions; differentiable exact penalty functions; and local and global convergence of Lagrangian methods. The book then takes up the nonquadratic penalty functions of convex programming. Topics include large scale separable integer programming problems and the exponential method of multipliers; classes of penalty functions and corresponding methods of multipliers; and convergence analysis of multiplier methods.

The text is a valuable reference for mathematicians and researchers interested in Lagrange multiplier methods.
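The first-order method of multipliers that the blurb mentions can be illustrated with a small sketch. The toy problem, parameter values, and function names below are illustrative assumptions, not taken from the book: we minimize x² + y² subject to x + y = 1 by repeatedly minimizing the augmented Lagrangian L_c(x, λ) = f(x) + λ·h(x) + (c/2)·h(x)², then updating the multiplier via λ ← λ + c·h(x).

```python
# Minimal sketch of the first-order method of multipliers (augmented
# Lagrangian method) on an assumed toy problem:
#     minimize  x^2 + y^2   subject to  x + y = 1,
# whose solution is x = y = 0.5 with multiplier lam = -1.
# All parameter choices (c, step sizes, iteration counts) are illustrative.

def method_of_multipliers(c=10.0, outer=20, inner=300, step=0.05):
    x, y, lam = 0.0, 0.0, 0.0              # primal and multiplier estimates
    for _ in range(outer):
        # Inner loop: approximately minimize the augmented Lagrangian
        # in (x, y) by plain gradient descent.
        for _ in range(inner):
            h = x + y - 1.0                # constraint residual h(x, y)
            gx = 2.0 * x + lam + c * h     # d/dx of L_c
            gy = 2.0 * y + lam + c * h     # d/dy of L_c
            x -= step * gx
            y -= step * gy
        # Outer loop: first-order multiplier update  lam <- lam + c * h.
        lam += c * (x + y - 1.0)
    return x, y, lam

x, y, lam = method_of_multipliers()
print(x, y, lam)   # converges toward x = y = 0.5, lam = -1
```

Unlike the pure quadratic penalty method, the penalty parameter c need not be driven to infinity here: the multiplier update absorbs the constraint violation, which is the key computational advantage the book develops.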

Preface

Chapter 1 Introduction

1.1 General Remarks

1.2 Notation and Mathematical Background

1.3 Unconstrained Minimization

1.3.1 Convergence Analysis of Gradient Methods

1.3.2 Steepest Descent and Scaling

1.3.3 Newton's Method and Its Modifications

1.3.4 Conjugate Direction and Conjugate Gradient Methods

1.3.5 Quasi-Newton Methods

1.3.6 Methods Not Requiring Evaluation of Derivatives

1.4 Constrained Minimization

1.5 Algorithms for Minimization Subject to Simple Constraints

1.6 Notes and Sources

Chapter 2 The Method of Multipliers for Equality Constrained Problems

2.1 The Quadratic Penalty Function Method

2.2 The Original Method of Multipliers

2.2.1 Geometric Interpretation

2.2.2 Existence of Local Minima of the Augmented Lagrangian

2.2.3 The Primal Functional

2.2.4 Convergence Analysis

2.2.5 Comparison with the Penalty Method—Computational Aspects

2.3 Duality Framework for the Method of Multipliers

2.3.1 Stepsize Analysis for the Method of Multipliers

2.3.2 The Second-Order Multiplier Iteration

2.3.3 Quasi-Newton Versions of the Second-Order Iteration

2.3.4 Geometric Interpretation of the Second-Order Multiplier Iteration

2.4 Multiplier Methods with Partial Elimination of Constraints

2.5 Asymptotically Exact Minimization in Methods of Multipliers

2.6 Primal-Dual Methods Not Utilizing a Penalty Function

2.7 Notes and Sources

Chapter 3 The Method of Multipliers for Inequality Constrained and Nondifferentiable Optimization Problems

3.1 One-Sided Inequality Constraints

3.2 Two-Sided Inequality Constraints

3.3 Approximation Procedures for Nondifferentiable and Ill-Conditioned Optimization Problems

3.4 Notes and Sources

Chapter 4 Exact Penalty Methods and Lagrangian Methods

4.1 Nondifferentiable Exact Penalty Functions

4.2 Linearization Algorithms Based on Nondifferentiable Exact Penalty Functions

4.2.1 Algorithms for Minimax Problems

4.2.2 Algorithms for Constrained Optimization Problems

4.3 Differentiable Exact Penalty Functions

4.3.1 Exact Penalty Functions Depending on x and λ

4.3.2 Exact Penalty Functions Depending Only on x

4.3.3 Algorithms Based on Differentiable Exact Penalty Functions

4.4 Lagrangian Methods—Local Convergence

4.4.1 First-Order Methods

4.4.2 Newton-like Methods for Equality Constraints

4.4.3 Newton-like Methods for Inequality Constraints

4.4.4 Quasi-Newton Versions

4.5 Lagrangian Methods—Global Convergence

4.5.1 Combinations with Penalty and Multiplier Methods

4.5.2 Combinations with Differentiable Exact Penalty Methods—Newton and Quasi-Newton Versions

4.5.3 Combinations with Nondifferentiable Exact Penalty Methods—Powell's Variable Metric Approach

4.6 Notes and Sources

Chapter 5 Nonquadratic Penalty Functions—Convex Programming

5.1 Classes of Penalty Functions and Corresponding Methods of Multipliers

5.1.1 Penalty Functions for Equality Constraints

5.1.2 Penalty Functions for Inequality Constraints

5.1.3 Approximation Procedures Based on Nonquadratic Penalty Functions

5.2 Convex Programming and Duality

5.3 Convergence Analysis of Multiplier Methods

5.4 Rate of Convergence Analysis

5.5 Conditions for Penalty Methods to Be Exact

5.6 Large Scale Separable Integer Programming Problems and the Exponential Method of Multipliers

5.6.1 An Estimate of the Duality Gap

5.6.2 Solution of the Dual and Relaxed Problems

5.7 Notes and Sources

References

Index

- No. of pages: 412
- Imprint: Academic Press

Read *Constrained Optimization and Lagrange Multiplier Methods* on ScienceDirect.