Maximum and Minimum Values of a Function

Where is the “best” point of a function: the maximum you want to achieve or the minimum you need to avoid? That question, which arises in optimization, physics, economics, and engineering, is one of the main applications of differential calculus. And here is the key point: the Weierstrass Theorem guarantees that, if f is continuous and you work on a closed and bounded interval, then absolute extrema exist. From there, the process becomes practical: learning to detect local extrema using critical points (f'(x)=0 or does not exist) and applying tools such as Rolle’s Theorem and the Mean Value Theorem to transform a “blind” search into a clear, verifiable, and efficient method.

Learning Objectives:

  1. Execute a complete procedure to find absolute extrema on [a,b]: evaluate f at interior critical points and at the endpoints of the interval, and compare values to determine absolute maximum and minimum.
  2. Contrast the value of a necessary condition versus a sufficient one: recognize that “f'(x_0)=0” does not guarantee a local extremum, and decide which additional evidence (value comparison, sign analysis, local behavior) is relevant in each case.
  3. Determine the most efficient strategy according to the type of problem: absolute extrema on compact intervals (Weierstrass + finite evaluation) versus local extrema at interior points (critical points + local analysis), justifying the choice.

CONTENT INDEX:
Maximum and minimum values, absolute and local extrema
First Derivative Test
Rolle’s Theorem
The Differential Mean Value Theorem
Intervals of increase and decrease

The Weierstrass Theorem guarantees that a real function that is defined and continuous on a closed and bounded subset of \mathbb{R} necessarily attains maximum and minimum values (absolute extrema). The search for maximum and minimum values of a function is known as an optimization problem; for a continuous function on a compact domain, the theorem thus ensures that such a problem has solutions in the sense of absolute extrema. With existence ensured, what remains is to develop strategies that allow these solutions to be found.
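The overall strategy that the rest of this section justifies can be sketched numerically. The snippet below is an illustrative implementation, not taken from the text: the function names and the sampling-plus-bisection scheme for locating zeros of f' are assumptions. It collects the endpoints and the interior critical points, then compares the values of f at every candidate.

```python
def absolute_extrema(f, df, a, b, n=10_000):
    """Approximate the absolute extrema of f on [a, b]: gather the
    endpoints and the interior zeros of df (critical points), then
    compare the values of f at every candidate."""
    candidates = [a, b]
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if df(x0) == 0.0:                    # grid point is already critical
            candidates.append(x0)
        elif df(x0) * df(x1) < 0.0:          # sign change: bisect for the zero
            lo, hi = x0, x1
            for _ in range(60):
                mid = (lo + hi) / 2
                lo, hi = (lo, mid) if df(lo) * df(mid) <= 0 else (mid, hi)
            candidates.append((lo + hi) / 2)
    values = [(f(x), x) for x in candidates]
    return max(values), min(values)

# Example: f(x) = x^3 - 3x on [-2, 3] has critical points at x = -1 and x = 1.
(fmax, xmax), (fmin, xmin) = absolute_extrema(
    lambda x: x**3 - 3 * x, lambda x: 3 * x**2 - 3, -2.0, 3.0
)
# Absolute maximum 18 at the endpoint x = 3; absolute minimum -2,
# attained both at the endpoint x = -2 and at the critical point x = 1.
```

Note that the guarantee of the Weierstrass Theorem is what makes the final comparison meaningful: without continuity on a compact interval, the finite list of candidates might miss the optimum entirely.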


Maximum and minimum values, absolute and local extrema

Before beginning to review strategies for the search for maximum and minimum values, let us clearly define what we want to identify.

DEFINITION:
Let f be a function with domain D. We say that f attains an absolute maximum at a point x_0\in D if:

\left( \forall x \in D \right)\bigl(f(x) \leq f(x_0)\bigr)

and it attains an absolute minimum at x_0 if:

\left( \forall x \in D \right)\bigl( f(x_0) \leq f(x)\bigr)

Local extrema (relative to the domain) are defined analogously.

DEFINITION:
Let f be a function with domain D and let x_0\in D. We say that f attains a local maximum at x_0 if:

(\exists h>0)\left( \forall x\in [x_0-h, x_0+h] \cap D \right)\bigl(f(x) \leq f(x_0)\bigr)

and it attains a local minimum at x_0 if:

(\exists h>0)\left( \forall x\in [x_0-h, x_0+h] \cap D \right)\bigl( f(x_0) \leq f(x)\bigr)
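As a concrete illustration (the function is an example of ours, not from the text), f(x) = -|x| attains a local maximum at x_0 = 0, and the definition can be checked on a sample of points. Note that f is not differentiable at 0, a case that will matter later when critical points are defined.

```python
# Illustrative check of the local-maximum definition for f(x) = -|x| at
# x0 = 0: any h > 0 works, since f(x) <= 0 = f(0) for every x.
f = lambda x: -abs(x)
x0, h = 0.0, 0.5
sample = [x0 - h + 2 * h * i / 1000 for i in range(1001)]
is_local_max = all(f(x) <= f(x0) for x in sample)
```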

From this, we can state the following result:

THEOREM:

Let x_0 be a point interior to a compact interval I. If f attains a local maximum or minimum at x_0 and f^\prime(x_0) exists, then f^\prime(x_0)=0.

PROOF:
Suppose that f attains a local maximum at x_0. Then there exists h_0 \gt 0 such that, for every h with |h|\lt h_0 and with x_0+h\in I, the following holds:

f(x_0 + h)\leq f(x_0)

which is equivalent to:

f(x_0 + h) - f(x_0)\leq 0

Let us now consider two cases:

  • If h>0, then:

    \displaystyle \frac{f(x_0 + h) - f(x_0)}{h}\leq 0

  • If h\lt 0, then:

    \displaystyle \frac{f(x_0 + h) - f(x_0)}{h}\geq 0

If f^\prime(x_0) exists, then the limit of the difference quotient as h\to 0 exists and must be compatible with both inequalities, which forces:

\displaystyle f^\prime(x_0)=\lim_{h\to 0}\frac{f(x_0 + h) - f(x_0)}{h}= 0

This is what was to be proved.

It should be noted that this proof is also valid for local minima. In that case, one begins with: f(x_0+h)\ge f(x_0) for |h| sufficiently small.


First Derivative Test


The result we have just reviewed can be summarized in the following implication:

\left\{\begin{matrix}f \text{ attains a}\\ \text{local extremum at }x_0 \end{matrix}\right\} \Longrightarrow \left\{\begin{matrix} \displaystyle f^\prime(x_0) = 0 \\ \\ \vee \\ \\ \text{The derivative does not exist at }x_0 \end{matrix}\right\}

Although the converse of this implication is not true in general, the implication itself is very useful for narrowing the search for local extrema. Based on it, the critical points of the first derivative are defined.

DEFINITION:
It is said that x_0 is a critical point of the first derivative if f^\prime(x_0)=0 or if f^\prime(x_0) does not exist.

The critical points of the first derivative are relevant because every interior point at which the function attains an extremum (local or absolute) must belong to the set of critical points:

\left\{\begin{matrix}\text{interior points that}\\ \text{attain absolute extrema}\end{matrix}\right\} \subseteq \left\{\begin{matrix}\text{interior points that}\\ \text{attain local extrema}\end{matrix}\right\} \subseteq \left\{\begin{matrix}\text{critical points of the}\\ \text{first derivative}\end{matrix}\right\}

This is what we call the First Derivative Test, understood as a necessary condition for the existence of local extrema at interior points.
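A classical counterexample for the failed converse (our example, not from the text) is f(x) = x^3: the derivative vanishes at 0, yet f takes values below and above f(0) arbitrarily close to 0, so there is no local extremum there.

```python
# f(x) = x^3 has f'(0) = 0, but 0 is neither a local maximum nor a local
# minimum: arbitrarily close to 0, f takes values on both sides of f(0).
f = lambda x: x**3
df = lambda x: 3 * x**2
critical = (df(0.0) == 0.0)                 # 0 is a critical point
straddles = f(-1e-6) < f(0.0) < f(1e-6)     # values below and above f(0)
```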


Rolle’s Theorem


We have already seen that the determination of critical points of the first derivative is key in the search for local extrema. For this reason, it is natural to investigate under which conditions the existence of such critical points can be guaranteed. Progress in this direction comes from Rolle’s Theorem.

THEOREM:
Let f be a function defined and continuous on [a,b], and differentiable on ]a,b[. If f(a)=f(b), then there exists c\in]a,b[ such that f^\prime(c)=0.

PROOF:
We will analyze two possibilities:

  1. If for all x\in]a,b[ it holds that f(x)=f(a)=f(b), then f is constant and, consequently, f^\prime(x)=0 for all x\in]a,b[. In particular, there exists c\in]a,b[ with f^\prime(c)=0.
  2. If there exists x\in]a,b[ such that f(x)\neq f(a)=f(b), then f is not constant. Since f is continuous on [a,b], by the Weierstrass Theorem it attains an absolute maximum and an absolute minimum on [a,b].

    Additionally, since f(a)=f(b) and f is not constant, at least one of these extrema must occur in the interior ]a,b[: if some interior value exceeds f(a)=f(b), the absolute maximum cannot be attained at an endpoint, and the analogous argument applies to the minimum.

    Thus, let c\in]a,b[ be an interior point where f attains a local extremum. Since f is differentiable on ]a,b[, f^\prime(c) exists, and by the previous theorem it follows that f^\prime(c)=0.
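The theorem can be illustrated numerically (the choice f = sin is our own example): f is continuous on [0, \pi], differentiable inside, and f(0) = f(\pi) = 0, so f'(c) = \cos(c) must vanish for some c in ]0,\pi[.

```python
import math

# Rolle's Theorem on [0, pi] with f = sin: equal endpoint values force an
# interior zero of the derivative f' = cos.
f, df = math.sin, math.cos
a, b = 0.0, math.pi
endpoints_equal = abs(f(a) - f(b)) < 1e-12

# cos changes sign on ]a, b[; bisect to locate the guaranteed point c
lo, hi = a + 1e-9, b - 1e-9
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if df(lo) * df(mid) <= 0 else (mid, hi)
c = (lo + hi) / 2
# c converges to pi/2, where cos vanishes
```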


The Differential Mean Value Theorem


Another result that is a direct consequence of those we have just reviewed, and which provides useful information for the study of functions, is the Mean Value Theorem for differential calculus.

THEOREM:
Let f be a function defined and continuous on [a,b], and differentiable on ]a,b[. Then there exists c\in]a,b[ such that:

f^\prime(c) =\displaystyle \frac{f(b) - f(a)}{b-a}

PROOF:
Let F be the function defined by:

F(x) = f(x) - \displaystyle \frac{f(b) - f(a)}{b-a}(x-a)

This function is continuous on [a,b] and differentiable on ]a,b[ because f is as well. Moreover, F(a)=F(b), so we can apply Rolle’s Theorem to conclude that there exists a point c\in]a,b[ such that F^\prime(c)=0.

Now, differentiating F yields:

F^\prime(x) = f^\prime(x) - \displaystyle\frac{f(b) - f(a)}{b-a}

Evaluating at c and using F^\prime(c)=0:

0=F^\prime(c) = f^\prime(c) - \displaystyle\frac{f(b) - f(a)}{b-a}

Hence:

f^\prime(c) = \displaystyle\frac{f(b) - f(a)}{b-a}

which is what was to be proved.
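A quick check of the statement and of the auxiliary function from the proof (the choice f(x) = x^2 on [0, 2] is our own example): the secant slope is (f(2) - f(0))/(2 - 0) = 2, and f'(c) = 2c matches it at the interior point c = 1.

```python
# Mean Value Theorem check for f(x) = x^2 on [0, 2].
f, df = (lambda x: x**2), (lambda x: 2 * x)
a, b = 0.0, 2.0
secant = (f(b) - f(a)) / (b - a)      # = 2
c = secant / 2                        # solve f'(c) = 2c = secant for this f

# The auxiliary function from the proof has equal endpoint values,
# so Rolle's Theorem applies to it: F(a) = F(b) = 0.
F = lambda x: f(x) - secant * (x - a)
```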


Intervals of increase and decrease

THEOREM:

  • If f is a function such that (\forall x\in ]a,b[)\left(0\lt f^\prime(x)\right), then f is strictly increasing on ]a,b[.
  • If f is a function such that (\forall x\in ]a,b[)\left(f^\prime(x)\lt 0\right), then f is strictly decreasing on ]a,b[.

PROOF:
Let x_1,x_2\in ]a,b[ be such that x_1 \lt x_2. Since f is differentiable on ]a,b[, we can apply the Mean Value Theorem to f on the interval [x_1,x_2]\subset ]a,b[. Consequently, there exists a point c\in]x_1,x_2[ such that:

f^\prime(c) = \displaystyle\frac{f(x_2) - f(x_1)}{x_2 - x_1}

From this:

  • In the first case, f^\prime \gt 0 on ]a,b[, so in particular f^\prime(c) \gt 0 and f(x_2) - f(x_1) = f^\prime(c)(x_2 - x_1) \gt 0, that is, f(x_1) \lt f(x_2).
    Since x_1 \lt x_2 were arbitrary, f is strictly increasing.
  • In the second case, f^\prime \lt 0 on ]a,b[, so f^\prime(c) \lt 0 and f(x_2) - f(x_1) = f^\prime(c)(x_2 - x_1) \lt 0, that is, f(x_2) \lt f(x_1).
    Therefore, f is strictly decreasing.
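The theorem can be observed on a sample (the function is our own example): f(x) = x^3 - 3x has f'(x) = 3x^2 - 3, which is positive on ]1, 3[ and negative on ]-1, 1[, so f should be strictly increasing on the first interval and strictly decreasing on the second.

```python
# Sign of f' and monotonicity for f(x) = x^3 - 3x on sampled intervals.
f = lambda x: x**3 - 3 * x
up = [1.0 + 2.0 * i / 100 for i in range(101)]      # sample of [1, 3]
down = [-1.0 + 2.0 * i / 100 for i in range(101)]   # sample of [-1, 1]
increasing = all(f(x0) < f(x1) for x0, x1 in zip(up, up[1:]))
decreasing = all(f(x0) > f(x1) for x0, x1 in zip(down, down[1:]))
```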

Studying maxima and minima is not merely “taking derivatives,” but rather learning how to transform a diffuse search into a procedure with guarantees and clear criteria. Weierstrass tells you when you can rely on the existence of an optimum on a compact interval, while the first derivative criterion, Rolle’s Theorem, and the Mean Value Theorem provide the map for identifying candidates and justifying conclusions: where a function may attain extrema, when that condition is only necessary, and how the sign of f' reveals increase and decrease. If you master this chain of ideas, you move from viewing graphs intuitively to solving optimization problems with verifiable arguments, which is precisely the difference between “I think the best point is here” and “I know why it must be here.”
