Hermite-Genocchi Formula
Formulas
Suppose we have distinct points $x_{0}, \cdots , x_{n}$ and $f \in C^{n} \left( \mathscr{H} \left\{ x_{0}, \cdots , x_{n} \right\} \right)$. Then, for the standard simplex $$ \tau_{n} := \left\{ ( t_{1} , \cdots , t_{n} ) : t_{i} \ge 0 \land \sum_{i=1}^{n} t_{i} \le 1 \right\} $$ and $\displaystyle t_{0} = 1 - \sum_{i=1}^{n} t_{i}$, the following holds. $$ f [ x_{0}, \cdots , x_{n} ] = \int \cdots \int_{\tau_{n}} f^{(n)} ( t_{0} x_{0} + \cdots + t_{n} x_{n} ) dt_{1} \cdots dt_{n} $$
- $\mathscr{H} \left\{ a,b,c, \cdots \right\}$ represents the smallest interval that includes $a,b,c, \cdots$.
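As a quick numerical sanity check (not part of the source), the formula can be verified for $n = 2$ with $f = \exp$, whose second derivative is again $\exp$. The recursive `divided_difference` helper and the midpoint-rule integrator below are illustrative sketches, and the nodes are chosen arbitrarily.

```python
import math

def divided_difference(f, xs):
    """Recursive divided difference f[x_0, ..., x_n] for pairwise distinct nodes."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

def simplex_integral_n2(d2f, x0, x1, x2, m=400):
    """Midpoint-rule approximation of the Hermite-Genocchi integral for n = 2:
    integral of f''(t0*x0 + t1*x1 + t2*x2) over the standard simplex tau_2,
    where t0 = 1 - t1 - t2."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        t1 = (i + 0.5) * h
        for j in range(m):
            t2 = (j + 0.5) * h
            if t1 + t2 <= 1.0:          # keep only midpoints inside the simplex
                t0 = 1.0 - t1 - t2
                total += d2f(t0 * x0 + t1 * x1 + t2 * x2)
    return total * h * h

f = math.exp                     # f = exp, so f'' = exp as well
x0, x1, x2 = 0.0, 0.5, 1.3       # arbitrary distinct nodes
lhs = divided_difference(f, [x0, x1, x2])
rhs = simplex_integral_n2(math.exp, x0, x1, x2)
print(lhs, rhs)                  # the two values agree to several decimal places
```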
Explanation
The Hermite-Genocchi Formula generalizes finite differences, and despite its complicated appearance it allows far more flexibility than the original definition. In a sense, the finite differences defined by the Hermite-Genocchi Formula are the real object of study, while the traditional finite differences merely provide a simpler way to grasp this new concept.
Duplicate Data and Derivative Coefficients
If a finite difference $f [ \underbrace{ x_{i} , \cdots , x_{i} }_{ n+1 } ]$ were to exist, then since $\displaystyle \sum_{i=0}^{n} t_{i} = 1$, $$ t_{0} x_{0} + \cdots + t_{n} x_{n} = t_{0} x_{i} + \cdots + t_{n} x_{i} = x_{i} $$ so $f^{(n)} ( x_{i} )$ is a constant that can be pulled out of the integral, giving $$ f [ \underbrace{ x_{i} , \cdots , x_{i} }_{ n+1 } ] = f^{(n)} ( x_{i} ) \int \cdots \int_{\tau_{n}} 1 \, dt_{1} \cdots dt_{n} $$
Meanwhile, drawing the integration range $\tau_{n}$ up to dimension $3$ illustrates the region represented by the standard simplex, as seen in the image above. Its volume is simply $\displaystyle \int \cdots \int_{\tau_{n}} 1 \, dt_{1} \cdots dt_{n} = \operatorname{vol} ( \tau_{n} ) = {{1} \over {n!}}$, and therefore $$ f [ \underbrace{ x_{i} , \cdots , x_{i} }_{ n+1 } ] = {{1} \over {n!}} f^{(n)} ( x_{i} ) $$ This covers the case where all of $x_{0}, \cdots , x_{n}$ coincide, which is impossible under the original recursive definition of finite differences $$ f [ x_{0} , \cdots , x_{n} ] : = {{ f [ x_{1} , \cdots , x_{n} ] - f [ x_{0} , \cdots , x_{n-1} ] } \over { x_{n} - x_{0} }} $$ since the denominator becomes $0$. The Hermite-Genocchi Formula, however, sidesteps this concern entirely, providing the following corollaries:
- [3]’: $$f [ x_{i} , x_{i} ] = f ' ( x_{i} )$$
- [4]’: $$f [ \underbrace{ x_{i} , \cdots , x_{i} }_{ n+1 } ] = {{1} \over {n!}} f^{(n)} ( x_{i} ) $$
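Corollary [4]' can be checked numerically (a sketch, not from the source): taking $n+1$ nearly coincident nodes, the divided difference should approach $f^{(n)}(x_{i}) / n!$. Here $f = \exp$, so every derivative is $\exp$; the recursive `divided_difference` helper, the point $x_{i} = 0.7$, and the spacing $h$ are all arbitrary choices for illustration.

```python
import math

def divided_difference(f, xs):
    """Recursive divided difference f[x_0, ..., x_n]; nodes must be pairwise distinct."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

# Corollary [4]': as the nodes merge into a single point x_i,
# f[x_i, ..., x_i] (n+1 copies) tends to f^(n)(x_i) / n!.
f = math.exp                                  # every derivative of exp is exp
xi, n, h = 0.7, 3, 1e-3
nodes = [xi + k * h for k in range(n + 1)]    # nearly coincident nodes
approx = divided_difference(f, nodes)
exact = math.exp(xi) / math.factorial(n)      # f'''(x_i) / 3!
print(approx, exact)                          # close for small h
```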
Moreover, in this discussion there is no requirement that all the nodes be identical. Finite differences are used precisely because derivative coefficients are hard to compute directly, so if any information about those coefficients is available, it is better to use it. For instance, consider the form $$ f[x_{0}, x_{1}, x_{1}] = {{ {{ f(x_{1}) - f(x_{1}) } \over { x_{1} - x_{1} }} - f [ x_{1} , x_{0} ] } \over { x_{1} - x_{0} }} $$ which, taken literally, is mathematically nonsensical; conceptually, however, the identification $$ {{ f(x_{1}) - f(x_{1}) } \over { x_{1} - x_{1} }} \equiv \lim_{h \to 0} {{ f(x_{1} + h) - f(x_{1}) } \over { ( x_{1} + h ) - x_{1} }} = f '(x_{1}) $$ is not outright absurd. Therefore, $$ f[x_{0}, x_{1}, x_{1}] = {{ f '(x_{1}) - f [ x_{1} , x_{0} ] } \over { x_{1} - x_{0} }} $$ says to use whatever information about the derivative coefficients is at hand. This generalization to duplicate nodes also opens up possibilities for differentiation: for a finite difference $f[x_{0} , x_{1}, x]$ with an unfixed variable $x$, differentiating with respect to $x$ yields $$ {{ d } \over { dx }} f[x_{0} , x_{1}, x] = \lim_{h \to 0} {{ f[x_{0} , x_{1}, x+h] - f[x_{0} , x_{1}, x] } \over { h }} $$ Since a finite difference is unchanged by reordering its nodes, $$ {{ d } \over { dx }} f[x_{0} , x_{1}, x] = \lim_{h \to 0} {{ f[x_{0} , x_{1}, x+h] - f[ x, x_{0} , x_{1}] } \over { (x+h) - x }} $$ and by the definition of finite differences, $$ {{ d } \over { dx }} f[x_{0} , x_{1}, x] = \lim_{h \to 0} f[x, x_{0} , x_{1}, x+h] = f[x, x_{0} , x_{1}, x] $$ Generalizing this to $x_{0}, \cdots , x_{n}$ gives $$ {{ d } \over { dx }} f[x_{0}, \cdots , x_{n}, x] = f[x, x_{0}, \cdots , x_{n}, x] $$ that is, differentiating a finite difference with respect to $x$ effectively adds another instance of $x$. This aligns intuitively with the fact that having more nodes in the finite difference is akin to taking more derivatives.
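Both claims above can be checked numerically (an illustrative sketch, not from the source). Repeated nodes are approximated by splitting a node into two points a small distance $h$ apart; `divided_difference`, the test function $f = \sin$, and all node values are arbitrary choices.

```python
import math

def divided_difference(f, xs):
    """Recursive divided difference f[x_0, ..., x_n] for pairwise distinct nodes."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

f, df = math.sin, math.cos      # f' = cos is known exactly for f = sin
x0, x1, h = 0.2, 1.1, 1e-5

# Claim 1: f[x0, x1, x1] = (f'(x1) - f[x1, x0]) / (x1 - x0).
# The repeated node x1 is approximated by the pair (x1, x1 + h).
lhs = divided_difference(f, [x0, x1, x1 + h])
rhs = (df(x1) - divided_difference(f, [x1, x0])) / (x1 - x0)
print(lhs, rhs)

# Claim 2: d/dx f[x0, x1, x] = f[x, x0, x1, x].
# Left side: central difference quotient in x; right side: four-node
# divided difference with the duplicated x split symmetrically.
x = 0.6
deriv = (divided_difference(f, [x0, x1, x + h]) -
         divided_difference(f, [x0, x1, x - h])) / (2 * h)
dup = divided_difference(f, [x - h, x0, x1, x + h])
print(deriv, dup)
```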
Derivation 1
The derivation itself can be straightforwardly achieved through mathematical induction but is skipped here due to the complexity and messiness of the equations and indices involved. It’s recommended to accept it as a fact and move on.
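Although the full induction is omitted, the base case $n = 1$ is short enough to state: with $t_{0} = 1 - t_{1}$, the Hermite-Genocchi integral reduces to the fundamental theorem of calculus.

$$ \int_{0}^{1} f ' \big( t_{1} x_{1} + (1 - t_{1}) x_{0} \big) \, dt_{1} = \left[ {{ f \big( t_{1} x_{1} + (1 - t_{1}) x_{0} \big) } \over { x_{1} - x_{0} }} \right]_{t_{1} = 0}^{t_{1} = 1} = {{ f ( x_{1} ) - f ( x_{0} ) } \over { x_{1} - x_{0} }} = f [ x_{0} , x_{1} ] $$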
■
Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): pp. 145–146. ↩︎