# Exploring a Different Kind of Calculus

And Deriving a Beautiful Identity in the Process

Since *Leibniz* and *Newton* developed calculus, the scientific world and all its disciplines have not been the same.

From physics and engineering to biology and mathematics, calculus permeates the modern sciences as an indispensable tool in the search for truth and invention.

However, there are different ways of looking at it. From a physical and practical view, it has to do with the concept of *rate of change* and that leads to applicability in physics through laws written down in the form of differential equations.

From a purely mathematical perspective, we have several ways of looking at it. The act of differentiating a function can be seen as a transformation from one space of functions to another. I won’t go into detail about function spaces and what the word “space” is covering, but it suffices to say that a set of functions displays some of the same properties as a space of vectors over an underlying field, e.g. ℝ.

This transformation is a certain kind of mapping *d/dx* with the following two important properties:

$$\frac{d}{dx}\big(f + g\big) = \frac{df}{dx} + \frac{dg}{dx}, \qquad \frac{d}{dx}\big(c \cdot f\big) = c \, \frac{df}{dx}$$

These properties together are called linearity.

In analogy, the real function *f(x) = kx*, where *k* is some real number, also satisfies the linearity conditions above in the sense that

$$f(x + y) = f(x) + f(y), \qquad f(c x) = c \, f(x)$$

It also bears an analogy to linear transformations in the subject of *linear algebra*, where the objects of study are linear transformations between vector spaces, e.g. matrices, which satisfy the linearity conditions as well.

The first property is a very important one: intuitively, it says that the operator preserves the structure of a function space with respect to the operation of addition.

So this is like a dictionary between two worlds, translating additive structure from one world to the other. This kind of function is called a homomorphism.

What we will do in this article is define another kind of operator: a transformation between function spaces that, instead of resembling the linear function above, resembles the logarithm.

We shall also derive the various rules analogous to the differential operator defined above such as the product rule, chain rule, etc. It turns out that our operator also displays homomorphic behavior but from a multiplicative function space to an additive one.

The operator that I am talking about is called the logarithmic derivative, and it is defined by the following:

$$f \mapsto \frac{f'}{f} = \big(\ln f\big)'$$
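As a quick sanity check, the definition is easy to probe numerically. Here is a minimal sketch, assuming a simple central-difference helper of my own (not from the article):

```python
def logderiv(f, x, h=1e-6):
    """Approximate the logarithmic derivative f'(x)/f(x) with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

# For f(x) = x**3, the definition gives f'/f = 3x**2 / x**3 = 3/x.
x = 2.0
print(abs(logderiv(lambda t: t**3, x) - 3 / x) < 1e-6)  # True
```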

Let’s state some of the nice properties that this operator has before using it.

A natural question is what it does to constants. As is easily shown, for any constant *c* ≠ 0,

$$\frac{c'}{c} = \frac{0}{c} = 0,$$

since it inherits this from the normal differential operator.

The most important property is that it takes products to sums and quotients to differences:

$$\frac{(f g)'}{f g} = \frac{f'}{f} + \frac{g'}{g}, \qquad \frac{(f/g)'}{f/g} = \frac{f'}{f} - \frac{g'}{g}$$

You can show both using the definition and the usual rules of differentiation.
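The product-to-sum rule can also be spot-checked numerically. A small sketch, with f = sin and g(x) = x² + 1 chosen arbitrarily by me, and a central-difference helper of my own:

```python
import math

def logderiv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)/f(x).
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

f = math.sin
g = lambda t: t * t + 1
x = 0.7

lhs = logderiv(lambda t: f(t) * g(t), x)   # log-derivative of the product
rhs = logderiv(f, x) + logderiv(g, x)      # sum of the log-derivatives
print(abs(lhs - rhs) < 1e-6)  # True
```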

When a function is multiplied by a constant, the constant vanishes when we logarithmically differentiate:

$$\frac{(c f)'}{c f} = \frac{c f'}{c f} = \frac{f'}{f}$$

This also means that we can change the order of the subtraction between functions in the operator argument, i.e.

$$\frac{(f/g)'}{f/g} = \frac{f'}{f} - \frac{g'}{g} = -\left(\frac{g'}{g} - \frac{f'}{f}\right) = -\,\frac{(g/f)'}{g/f}$$

Let’s now state the chain rule for this operator:

$$\frac{(f \circ g)'}{f \circ g} = \frac{f'(g)}{f(g)} \cdot g'$$

The derivation of this is quite simple: by the ordinary chain rule, (*f* ∘ *g*)′ = *f*′(*g*) · *g*′, and dividing through by *f*(*g*) gives the result.
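Here is a quick numerical check of the chain rule, with f = sin and g(x) = x² as test functions chosen by me:

```python
import math

def logderiv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)/f(x).
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

f, g = math.sin, lambda t: t * t
x = 0.9

lhs = logderiv(lambda t: f(g(t)), x)    # log-derivative of the composition
rhs = logderiv(f, g(x)) * (2 * x)       # (f'/f)(g(x)) * g'(x), with g'(x) = 2x
print(abs(lhs - rhs) < 1e-5)  # True
```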

It is quite nice how well it resembles the chain rule for the differential operator. We also have the power rule:

$$\frac{(f^n)'}{f^n} = n \, \frac{f'}{f}$$

In particular, we have the following two useful formulas:

$$\frac{(x^n)'}{x^n} = \frac{n}{x}, \qquad \frac{\left(e^{f}\right)'}{e^{f}} = f'$$

At this point, you might be wondering what the *eigen* functions of this operator are.

Of course, for the normal differential operator, it is the scaled (natural) exponential function, in the sense that

$$\frac{d}{dx}\, c e^{x} = c e^{x},$$

but for the logarithmic derivative, it turns out that

$$\frac{f'}{f} = f \qquad \text{for} \qquad f(x) = \frac{1}{c - x}$$

for all constants *c*, making these functions *eigen* elements in the function space we defined the operator on.
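One family of eigen elements of f ↦ f′/f (with eigenvalue 1) is f(x) = 1/(c − x): since f′ = 1/(c − x)² = f², we get f′/f = f. A brief numerical confirmation, with the helper and constants chosen by me:

```python
def logderiv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)/f(x).
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

c = 3.0
f = lambda t: 1 / (c - t)
x = 1.2
print(abs(logderiv(f, x) - f(x)) < 1e-6)  # True: f'/f equals f itself
```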

Now that we have some rules of the logarithmic derivative in our toolbox let’s use them.

Let’s first find the logarithmic derivative of the *sine* function. Using the definition, we quickly obtain

$$\frac{(\sin x)'}{\sin x} = \frac{\cos x}{\sin x} = \cot x$$

You might remember that the *sine* function can be written as an infinite product:

$$\sin x = x \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{\pi^2 n^2}\right)$$
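The product sin x = x · ∏ₙ (1 − x²/(π²n²)) converges quickly enough to check numerically. A truncation sketch, with the cutoff chosen arbitrarily by me:

```python
import math

def sine_product(x, terms=10000):
    """Truncation of the product x * prod(1 - x^2 / (pi^2 n^2))."""
    p = x
    for n in range(1, terms + 1):
        p *= 1 - (x / (math.pi * n)) ** 2
    return p

x = 1.0
print(abs(sine_product(x) - math.sin(x)) < 1e-3)  # True
```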

Now, using the above rules we can transform the infinite product into a series by taking the logarithmic derivative on both sides.

We get

$$\frac{(\sin x)'}{\sin x} = \frac{1}{x} + \sum_{n=1}^{\infty} \left(\frac{1}{x - \pi n} + \frac{1}{x + \pi n}\right),$$

that is,

$$\cot x = \frac{1}{x} + \sum_{n=1}^{\infty} \frac{2x}{x^2 - \pi^2 n^2}$$

We have used several of the rules defined above. The power rule, the product to sum, commuting subtraction, vanishing constant, and the chain rule, respectively.
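The resulting series, cot x = 1/x + Σₙ 2x/(x² − π²n²), can itself be tested numerically. A truncation sketch, with the cutoff chosen arbitrarily by me:

```python
import math

def cot_series(x, terms=10000):
    """Truncation of 1/x + sum over n of 2x / (x^2 - pi^2 n^2)."""
    s = 1.0 / x
    for n in range(1, terms + 1):
        s += 2 * x / (x * x - (math.pi * n) ** 2)
    return s

x = 1.0
print(abs(cot_series(x) - 1 / math.tan(x)) < 1e-4)  # True
```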

Technically, we need an argument to be sure that the product to sum rule holds on the infinite product. It turns out that the following is enough.

I will state it first and then explain it after.

- The factors inside the product are holomorphic (i.e. analytic) on an open subset *D* of the complex plane
- None of the factors is identically zero on *D*
- The product converges *locally uniformly* to a function *f* on *D*

If these conditions are true, then the corresponding series that we get by taking the logarithmic derivative on both sides converges locally uniformly on the following set

$$D \setminus \{z \in D : f(z) = 0\}$$

The backslash here means set-difference.

This was a little technical. What does it mean?

First of all, that a complex function is holomorphic means that it is “complex differentiable”. That turns out to be a stronger requirement than real differentiability, and in fact, if a complex function is differentiable on an open set, then it is differentiable infinitely many times, which means that it is analytic (i.e. it has a power series expansion). This is not true for real functions in general!

That the factors are not identically zero on *D* simply means that none of them maps an open subset of *D* to zero.

It turns out that if a holomorphic function is zero on any open subset of a connected open set *D* ⊆ ℂ on which it is defined, then it is zero everywhere on *D*, no matter how small that subset of *D* is! This is not true for real functions.

So informally speaking, a holomorphic function has global information in an arbitrarily small subset of its domain.

The *Identity Theorem* above is extremely important and is used in the proof of the fact that some analytic functions have an *analytic continuation*.

The *locally uniform convergence* is a more technical requirement, so I will not write it out in full detail, but it suffices to say that it is a weaker requirement than *uniform convergence* and a stronger one than *pointwise convergence*; thus if the product converges uniformly, then it converges locally uniformly.

I’ll leave the details as an exercise for the reader.

If we get back to the resulting series, it would be quite nice if we didn’t have that π lying around in the denominator, but a substitution *x → πx* and a multiplication by π reveal this identity in a more famous form, namely

$$\pi \cot(\pi x) = \frac{1}{x} + \sum_{n=1}^{\infty} \frac{2x}{x^2 - n^2}$$

*Leonhard Euler* was the first to find this series. He also used the infinite product representation of the *sine* function. You can read more about Euler’s beautiful discoveries here.

The logarithmic derivative operator is used extensively in complex analysis and analytic number theory and is a very important tool in the study of *The Riemann Zeta Function*. It is also used together with contour integration to count the zeros and poles of *meromorphic functions* inside closed contours.

Using our rules for this operator, we can find a lot of other series from infinite products.

How about you give it a try?