Barrier function
Continuous function whose value increases to infinity
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. A barrier function is also called an interior penalty function, as it is a penalty function that forces the solution to remain within the interior of the feasible region.
The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods.
Motivation
Consider the following constrained optimization problem:
minimize f(x)
subject to x ≤ b
where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as
minimize f(x) + c(x)
where c(x) = \infty if x > b, and zero otherwise.
This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it.
A barrier function is a continuous approximation g to c that tends to infinity as x approaches b from below (that is, from inside the feasible region). Using such a function, a new optimization problem is formulated, viz.
minimize f(x) + μ g(x)
where μ > 0 is a free parameter. This problem is not equivalent to the original, but as μ approaches zero it becomes an ever-better approximation.
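The sketch below illustrates this limiting behavior numerically. The objective f(x) = (x − 2)², the bound b = 1, and the use of the logarithmic barrier −log(b − x) (defined in the next section) are all illustrative choices, and the naive grid search stands in for a real optimizer; as μ shrinks, the minimizers of the penalized problem approach the constrained optimum x = 1.

```python
import math

def f(x):
    return (x - 2.0) ** 2          # example objective; unconstrained minimum at x = 2

def log_barrier(x, b=1.0):
    # -log(b - x) blows up as x -> b from below, +inf outside the feasible set
    return -math.log(b - x) if x < b else float("inf")

def minimize_1d(func, lo, hi, n=100000):
    # brute-force grid search on (lo, hi); a sketch, not a production optimizer
    best_x, best_v = None, float("inf")
    for i in range(1, n):
        x = lo + (hi - lo) * i / n
        v = func(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

for mu in (1.0, 0.1, 0.01, 0.001):
    x_star = minimize_1d(lambda x: f(x) + mu * log_barrier(x), -3.0, 1.0)
    print(mu, round(x_star, 4))
```

Each minimizer sits strictly inside the feasible region, but the sequence drifts toward the boundary x = 1 as the barrier's weight μ decreases.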
Logarithmic barrier function
For logarithmic barrier functions, g(x, b) is defined as -\log(b-x) when x < b and +\infty otherwise (in one dimension; see below for a definition in higher dimensions). This essentially relies on the fact that \log t tends to negative infinity as t tends to 0 from above.
This introduces a gradient to the function being optimized which favors less extreme values of x (in this case values lower than b), while having relatively low impact on the function away from these extremes.
Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized.
Higher dimensions
Extending to higher dimensions is simple, provided each dimension is independent. For each variable x_i which should be limited to be strictly lower than b_i, add -\log(b_i-x_i).
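Assuming independent per-coordinate upper bounds as described above, the summed barrier can be sketched as follows (`box_log_barrier` is a hypothetical helper name, not from any particular library):

```python
import math

def box_log_barrier(x, b):
    # Sum of per-coordinate barriers -log(b_i - x_i); returns +inf as soon as
    # any coordinate violates its strict upper bound x_i < b_i.
    total = 0.0
    for xi, bi in zip(x, b):
        if xi >= bi:
            return float("inf")
        total += -math.log(bi - xi)
    return total
```

For example, with x = [0, 0] and b = [1, 2] the value is −log 1 − log 2 = −log 2, while any point touching a bound evaluates to +∞.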
Formal definition
Minimize \mathbf c^T\mathbf x subject to \mathbf a_i^T\mathbf x \le b_i, \; i = 1,\ldots,m.
Assume the problem is strictly feasible, i.e. the set \{\mathbf x \mid A\mathbf x < \mathbf b\} is nonempty.
Define the logarithmic barrier
g(\mathbf x) = \begin{cases} \sum_{i=1}^m -\log(b_i - \mathbf a_i^T\mathbf x) & \text{for } A\mathbf x < \mathbf b \\ +\infty & \text{otherwise} \end{cases}
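A minimal sketch of this barrier for general constraints Ax ≤ b, in plain Python lists rather than a linear-algebra library, is given below. It also returns the gradient \sum_i \mathbf a_i / (b_i - \mathbf a_i^T\mathbf x), which is what interior-point methods actually use; the function name and interface are illustrative.

```python
import math

def lp_log_barrier(A, b, x):
    # Value and gradient of g(x) = sum_i -log(b_i - a_i^T x), where A is a
    # list of rows a_i and b the corresponding bounds.  Returns (inf, None)
    # when x is not strictly feasible (some slack b_i - a_i^T x <= 0).
    value = 0.0
    grad = [0.0] * len(x)
    for a_i, b_i in zip(A, b):
        slack = b_i - sum(a * xj for a, xj in zip(a_i, x))
        if slack <= 0.0:
            return float("inf"), None
        value -= math.log(slack)
        # d/dx_j of -log(b_i - a_i^T x) is a_ij / (b_i - a_i^T x)
        for j, a in enumerate(a_i):
            grad[j] += a / slack
    return value, grad
```

At the origin with A the 2×2 identity and b = (1, 1), both slacks equal 1, so the value is 0 and the gradient is (1, 1); as x moves toward a constraint the corresponding slack shrinks and the gradient term along that row's normal grows without bound.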
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.