Maximum and Minimum Values of a Function
Where is the “best” point of a function: the highest value you want to reach or the lowest value you want to attain? That question, which arises in optimization, physics, economics, and engineering, is one of the main applications of differential calculus. And here is the key point: the Weierstrass Theorem guarantees that, if f is continuous and you work on a closed and bounded interval, then absolute extrema exist. From there, the process becomes practical: learning to detect local extrema using critical points (f'(x)=0 or f'(x) does not exist) and applying tools such as Rolle’s Theorem and the Mean Value Theorem to transform a “blind” search into a clear, verifiable, and efficient method.
Learning Objectives:
- Execute a complete procedure to find absolute extrema on [a,b]: evaluate f at interior critical points and at the endpoints of the interval, and compare values to determine absolute maximum and minimum.
- Contrast the value of a necessary condition versus a sufficient one: recognize that “f'(x_0)=0” does not guarantee a local extremum, and decide which additional evidence (value comparison, sign analysis, local behavior) is relevant in each case.
- Determine the most efficient strategy according to the type of problem: absolute extrema on compact intervals (Weierstrass + finite evaluation) versus local extrema at interior points (critical points + local analysis), justifying the choice.
CONTENT INDEX:
Maximum and minimum values, absolute and local extrema
First Derivative Criterion
Rolle’s Theorem
The Differential Mean Value Theorem
Intervals of increase and decrease
The Weierstrass Theorem guarantees that, if a real function is defined and continuous on a closed and bounded subset of \mathbb{R}, then it necessarily attains maximum and minimum values (absolute extrema). The search for maximum and minimum values of a function is known as an optimization problem, and the Weierstrass Theorem guarantees the existence of solutions in the sense of absolute extrema, provided that the function is continuous and the domain is compact. With existence ensured, what remains is to develop strategies that allow these solutions to be found.
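With existence guaranteed on a compact interval, the search reduces to a finite comparison: evaluate f at the interior critical points and at the endpoints, then take the largest and smallest values. A minimal sketch of that procedure (the function f(x) = x^3 - 3x on [-2, 3], its critical points ±1 supplied by hand, and the helper name absolute_extrema are all illustrative assumptions):

```python
def absolute_extrema(f, critical_points, a, b):
    """Compare f at the endpoints and at the interior critical points."""
    candidates = [a, b] + [x for x in critical_points if a < x < b]
    values = {x: f(x) for x in candidates}
    x_min = min(values, key=values.get)
    x_max = max(values, key=values.get)
    return (x_min, values[x_min]), (x_max, values[x_max])

f = lambda x: x**3 - 3 * x          # f'(x) = 3x^2 - 3 = 0 at x = -1 and x = 1
(min_pt, min_val), (max_pt, max_val) = absolute_extrema(f, [-1.0, 1.0], -2.0, 3.0)
# Absolute maximum 18 at x = 3; absolute minimum -2 (attained at x = -2 and x = 1)
```

Note that the comparison is over finitely many candidates, which is exactly what makes the method terminate.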
Maximum and minimum values, absolute and local extrema
Before beginning to review strategies for the search for maximum and minimum values, let us clearly define what we want to identify.
DEFINITION: Let f be a function with domain D \subseteq \mathbb{R} and let x_0 \in D. We say that f attains an absolute maximum at x_0 if: \left( \forall x \in D \right)\bigl(f(x) \leq f(x_0)\bigr) and it attains an absolute minimum at x_0 if: \left( \forall x \in D \right)\bigl( f(x_0) \leq f(x)\bigr)
In an analogous manner, local extrema (relative to the domain) are defined.
DEFINITION: Let f be a function with domain D and let x_0 \in D. We say that f attains a local maximum at x_0 if: (\exists h>0)\left( \forall x\in [x_0-h, x_0+h] \cap D \right)\bigl(f(x) \leq f(x_0)\bigr) and it attains a local minimum at x_0 if: (\exists h>0)\left( \forall x\in [x_0-h, x_0+h] \cap D \right)\bigl( f(x_0) \leq f(x)\bigr)
From this, we can state the following result:
THEOREM: If f attains a local extremum at an interior point x_0 of its domain and f is differentiable at x_0, then f^\prime(x_0) = 0.
PROOF: Suppose f attains a local maximum at x_0. Then, for |h| sufficiently small: f(x_0 + h)\leq f(x_0) which is equivalent to: f(x_0 + h) - f(x_0)\leq 0 Let us now consider two cases:
If h>0, the difference quotient satisfies \displaystyle \frac{f(x_0 + h) - f(x_0)}{h}\leq 0, so its limit from the right is \leq 0.
If h<0, the difference quotient satisfies \displaystyle \frac{f(x_0 + h) - f(x_0)}{h}\geq 0, so its limit from the left is \geq 0.
If f^\prime(x_0) exists, then the limit of the difference quotient as h\to 0 exists and must be compatible with both inequalities, which forces: \displaystyle f^\prime(x_0)=\lim_{h\to 0}\frac{f(x_0 + h) - f(x_0)}{h}= 0 This is what was to be proved.
It should be noted that this proof is also valid for local minima. In that case, one begins with: f(x_0+h)\ge f(x_0) for |h| sufficiently small.
First Derivative Criterion
The result we have just reviewed can be summarized in the following implication:
\left\{\begin{matrix}f \text{ attains a}\\ \text{local extremum at }x_0 \end{matrix}\right\} \Longrightarrow \left\{\begin{matrix} \displaystyle f^\prime(x_0) = 0 \\ \\ \vee \\ \\ \text{The derivative does not exist at }x_0 \end{matrix}\right\}
Although the converse of this implication is not true in general, it is very useful when it comes to narrowing the search for local extrema. Based on this, the critical points of the first derivative are defined.
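The failure of the converse can be checked on the standard counterexample f(x) = x^3: the derivative vanishes at 0, yet the function takes values both below and above f(0) arbitrarily close to 0. A small sketch (the choice of function and of the step h are assumptions for illustration):

```python
f = lambda x: x**3                 # f'(x) = 3x^2, so f'(0) = 0
fp = lambda x: 3 * x**2
assert fp(0.0) == 0.0              # x = 0 is a critical point...
# ...but x^3 takes values below AND above f(0) = 0 arbitrarily close to 0,
# so f attains neither a local maximum nor a local minimum there:
h = 1e-6
below, above = f(-h) < f(0.0), f(h) > f(0.0)
```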
DEFINITION: A point x_0 in the domain of f is a critical point of the first derivative if f^\prime(x_0) = 0 or if f^\prime(x_0) does not exist.
The critical points of the first derivative are relevant because every point at which the function attains an extremum (locally or absolutely) must belong to the set of critical points:
\left\{\begin{matrix}\text{points that}\\ \text{attain absolute extrema}\end{matrix}\right\} \subseteq \left\{\begin{matrix}\text{points that}\\ \text{attain local extrema}\end{matrix}\right\} \subseteq \left\{\begin{matrix}\text{critical points of the}\\ \text{first derivative}\end{matrix}\right\}
This is what we call the first derivative criterion, understood as a necessary condition for the existence of local extrema at interior points.
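The inclusion chain suggests a concrete workflow: compute the critical points first, then examine only those candidates. A hedged sketch that approximates the zeros of f' by a sign-change scan followed by bisection (the grid size, the helper name critical_points, and the sample function f(x) = x^4 - 2x^2 are assumptions; points where f' fails to exist would have to be added by hand):

```python
def critical_points(fp, a, b, n=4000, tol=1e-9):
    """Approximate interior zeros of f' by sign-change scan + bisection."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    roots = []
    for lo, hi in zip(xs, xs[1:]):
        if fp(lo) == 0.0:                      # grid point is an exact zero
            roots.append(lo)
        elif fp(lo) * fp(hi) < 0:              # sign change: refine by bisection
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if fp(lo) * fp(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    return roots

fp = lambda x: 4 * x**3 - 4 * x    # derivative of f(x) = x^4 - 2x^2
roots = critical_points(fp, -2.0, 2.0)   # zeros of f' at x = -1, 0, 1
```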
Rolle’s Theorem
We have already seen that the determination of critical points of the first derivative is key in the search for local extrema. For this reason, it is natural to investigate under which conditions the existence of such critical points can be guaranteed. Progress in this direction comes from Rolle’s Theorem.
THEOREM (Rolle): If f is continuous on [a,b], differentiable on ]a,b[, and f(a) = f(b), then there exists a point c\in]a,b[ such that f^\prime(c) = 0.
PROOF: Since f is continuous on the closed and bounded interval [a,b], the Weierstrass Theorem guarantees that it attains an absolute maximum and an absolute minimum there. If both extrema are attained at the endpoints, then, because f(a)=f(b), the maximum and minimum values coincide, so f is constant on [a,b] and f^\prime(c)=0 for every c\in]a,b[. Otherwise, at least one absolute extremum is attained at an interior point c\in]a,b[; since f is differentiable there, the first derivative criterion yields f^\prime(c)=0. This is what was to be proved.
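Rolle’s Theorem can be illustrated numerically: when f(a) = f(b), a zero of f' must exist in ]a,b[, and bisection can locate it. A sketch under assumed choices (f(x) = x(x-1) on [0,1], whose derivative I write down explicitly, and the helper name bisect are illustrative):

```python
def bisect(g, lo, hi, tol=1e-10):
    """Locate a root of g on [lo, hi], assuming a sign change."""
    assert g(lo) * g(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x * (x - 1)          # f(0) = f(1) = 0, so Rolle applies
fp = lambda x: 2 * x - 1           # derivative of f
c = bisect(fp, 0.0, 1.0)           # the point guaranteed by Rolle: c = 1/2
```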
The Differential Mean Value Theorem
Another result that is a direct consequence of those we have just reviewed, and which provides useful information for the study of functions, is the Mean Value Theorem for differential calculus.
THEOREM: If f is continuous on [a,b] and differentiable on ]a,b[, then there exists a point c\in]a,b[ such that: f^\prime(c) =\displaystyle \frac{f(b) - f(a)}{b-a}
PROOF: Consider the auxiliary function: F(x) = f(x) - \displaystyle \frac{f(b) - f(a)}{b-a}(x-a) This function is continuous on [a,b] and differentiable on ]a,b[ because f is as well. Moreover, F(a)=F(b)=f(a), so we can apply Rolle’s Theorem to conclude that there exists a point c\in]a,b[ such that F^\prime(c)=0. Now, differentiating F yields: F^\prime(x) = f^\prime(x) - \displaystyle\frac{f(b) - f(a)}{b-a} Evaluating at c and using F^\prime(c)=0: 0=F^\prime(c) = f^\prime(c) - \displaystyle\frac{f(b) - f(a)}{b-a} Hence: f^\prime(c) = \displaystyle\frac{f(b) - f(a)}{b-a} which is what was to be proved.
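The auxiliary function in the proof can be checked on a concrete case. A sketch assuming f(x) = x^3 on [0,2] (function and interval are illustrative choices; the point c is solved by hand from 3c^2 = slope):

```python
f = lambda x: x**3                     # sample function, an assumption
a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)        # mean rate of change: (8 - 0)/2 = 4
F = lambda x: f(x) - slope * (x - a)   # auxiliary function from the proof
# F(a) == F(b), so Rolle applies to F and yields c with f'(c) = slope.
# Here f'(x) = 3x^2, so solving 3c^2 = slope by hand:
c = (slope / 3) ** 0.5                 # c lies in ]0, 2[ as the theorem promises
```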
Intervals of increase and decrease
THEOREM: Let f be continuous on [a,b] and differentiable on ]a,b[.
If f^\prime(x) > 0 for every x\in]a,b[, then f is increasing on [a,b].
If f^\prime(x) < 0 for every x\in]a,b[, then f is decreasing on [a,b].
PROOF: Let x_1, x_2\in[a,b] with x_1 < x_2. Since f is continuous on [x_1,x_2] and differentiable on ]x_1,x_2[, the Mean Value Theorem provides a point c\in]x_1,x_2[ such that: f^\prime(c) = \displaystyle\frac{f(x_2) - f(x_1)}{x_2 - x_1} From this:
Since x_2 - x_1 > 0, the sign of f(x_2) - f(x_1) coincides with the sign of f^\prime(c).
If f^\prime > 0 on ]a,b[, then f(x_2) - f(x_1) > 0, so f(x_1) < f(x_2) and f is increasing.
If f^\prime < 0 on ]a,b[, then f(x_2) - f(x_1) < 0, so f(x_1) > f(x_2) and f is decreasing.
This is what was to be proved.
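The theorem can be sketched on a sample function such as f(x) = x^3 - 3x (an illustrative choice): the sign of f' on each subinterval determined by the critical points classifies increase and decrease. The sample points and interval labels below are assumptions:

```python
f_prime = lambda x: 3 * x**2 - 3        # derivative of f(x) = x^3 - 3x
critical = [-1.0, 1.0]                  # zeros of f'
# Sample f' once inside each subinterval cut out by the critical points:
samples = {"]-inf,-1[": -2.0, "]-1,1[": 0.0, "]1,+inf[": 2.0}
signs = {name: ("increasing" if f_prime(t) > 0 else "decreasing")
         for name, t in samples.items()}
# f is increasing, then decreasing on ]-1,1[, then increasing again
```

Sampling once per subinterval suffices here because f' is continuous and vanishes only at the listed critical points, so its sign cannot change inside a subinterval.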
Studying maxima and minima is not merely “taking derivatives,” but rather learning how to transform a diffuse search into a procedure with guarantees and clear criteria. Weierstrass tells you when you can rely on the existence of an optimum on a compact interval, while the first derivative criterion, Rolle’s Theorem, and the Mean Value Theorem provide the map for identifying candidates and justifying conclusions: where a function may attain extrema, when that condition is only necessary, and how the sign of f' reveals increase and decrease. If you master this chain of ideas, you move from viewing graphs intuitively to solving optimization problems with verifiable arguments, which is precisely the difference between “I think the best point is here” and “I know why it must be here.”
