Lately, I’ve been trying to get through the fog of the log as I analyze my data, using logarithmic scales on my graphs to “unscrunch” my data. LOGARITHMS don’t come *naturally* to many (including me), but, as I’m frequently being reminded, they’re really *common* and useful. Logs and exponents have a lot of uses, the most obvious having to do with exponential growth and decay, where something is doubling (or halving) or tripling (or “thirding”?) at a fixed rate. The base is the number you’re multiplying by each time (e.g. 2 for doubling, 3 for tripling), and the logarithm tells you how many times you’ve multiplied. That something can be cells in a dish or money in the bank, so scientists & accountants both have logarithms to thank!
Logarithms are a different way of expressing exponents, one that “emphasizes” a different part. An exponent says: multiply me by myself this many times. A logarithm asks: how many times did you multiply me by myself? In more official terms, the exponent is the thing by which something is “raised” and the base is the thing you’re raising. (I know, I know, there are way too many meanings of “base”!) And the “answer” you get is called the argument.
For example, take a base of 2.
2² means multiply 2 2’s together -> 2*2 = 4
2³ means multiply 3 2’s together -> 2*2*2 = 8
2⁴ means multiply 4 2’s together -> 2*2*2*2 = 16
In all of those cases, the base is 2, but the exponent’s changing. If we want to “emphasize” the thing that’s actually changing, we can rewrite them in logarithmic form using the formula below (note: a caret ^ means “to the power of” and is helpful when you can’t easily use superscript to make the nice little shifted-up numbers).
if b is the base, c is the exponent, and a is the argument… logb(a) = c is the same as b^c = a
A way to help remember this is that the base (b) is always going to be “Below” something -> in logarithmic form it’s in subscript “below” the (a) and in exponential form it’s in “normal script” but there’s a superscript c above it. Then you just have to remember to “flip” the other 2 numbers.
so, in exponential form, we can write 2³ = 8 and we can write the exact same thing in logarithmic form as log₂(8) = 3
As you can see, the log is telling us how many times we had to multiply the base by itself to get the argument. We had to multiply 2 by itself 3 times to get 8.
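If you like checking this sort of thing on a computer, here’s a minimal sketch in Python (my own illustration, not part of the original post) showing the same fact in both forms:

```python
import math

# Exponential form: 2**3 = 8 (multiply three 2's together)
print(2 ** 3)          # 8

# Logarithmic form: log base 2 of 8 is 3
# (how many times did we multiply 2 by itself to get 8?)
print(math.log2(8))    # 3.0
```

`math.log2` is exact for powers of 2, which makes it nice for this kind of sanity check.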
Some calculations are easier to do when you’re in the exponential form. Other calculations are easier in the logarithmic form. So it’s nice to have a way to go between them. There are other useful properties of logs, some of which I show in the pics (these include being able to turn multiplication into addition and division into subtraction).
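A quick Python sketch (again, just my illustration) of those multiplication-to-addition and division-to-subtraction properties:

```python
import math

a, b = 8, 16

# Multiplication becomes addition: log2(a*b) == log2(a) + log2(b)
print(math.log2(a * b))               # 7.0
print(math.log2(a) + math.log2(b))    # 7.0

# Division becomes subtraction: log2(a/b) == log2(a) - log2(b)
print(math.log2(a / b))               # -1.0
print(math.log2(a) - math.log2(b))    # -1.0
```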
log₅(125). What’s the exponent? How many times do you have to multiply 5 by itself to get 125? 3 (5*5 = 25, 25*5 = 125). So log₅(125) = 3, which we can also write as 5³ = 125, which we can also write as 5*5*5 = 125.
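In Python (just as an illustration), `math.log` takes an optional base argument for logs in any base, though floating point can leave the answer a hair off from a whole number:

```python
import math

# log base 5 of 125 -- how many 5's multiply to 125?
print(math.log(125, 5))   # very close to 3 (floating point rounding)

# and back the other way, in exponential form:
print(5 ** 3)             # 125
```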
If you use a base of 10, that’s called the COMMON LOGARITHM. If you see log(something) without a specified base, depending on the field, it’s assumed the base is 10 (but sometimes people write log for ln so be careful).
Why so common? Think about the decimal system. Each time you move the decimal point 1 place to the right, you’re multiplying by 10 and if you move the decimal point 1 place to the left you’re dividing by 10, which is the same as multiplying by 1/10.
If you’re playing a guessing game, you might guess by orders of 10 (e.g. is it bigger than 10? 100? 1000? 10000?…) If you tried to graph those guesses on a linear scale, all those little numbers would get scrunched together in the corner and the bigger numbers would be all by themselves super far away. That’s not very helpful to look at, so you can plot it on a log10 scale. This way you’re looking at how many powers of 10 there are (the exponent), not the final number (the argument). So with a base of 10, you’re asking how many times do I have to multiply by 10 to get to some number. Basically you’re counting 0s.
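A tiny Python illustration (mine, not from the post) of the “counting 0s” idea:

```python
import math

# log10 of a power of ten just counts the zeros
for n in (10, 100, 1000, 10_000):
    print(n, math.log10(n))
```

Each answer is the number of zeros, which is exactly what a log10 axis spaces evenly.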
I’ve always been kinda uncomfortable with logarithmic scales, but they’re really helpful for graphing the data from my binding experiments. This is because, in those experiments, I do a serial dilution, so I go from 1 to 1:2, to 1:4, to 1:8, etc… This allows me to cover a wide range of concentrations, but if I graph it on a normal scale it’s hard to see the closer-together points – there’s a lot less of a difference in absolute terms between 1/16 & 1/32 than there is between 1 & 1/2. Graphing on a log scale makes it a lot easier to see all the different points. Even though I did 1:2 dilutions, I can use a log scale with any base and still have the descrunching because you can interconvert between them and change the numbers on the axis without changing the actual curve, just where the tick marks are.
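Here’s a Python sketch of that dilution series (assuming a 1:2 series and 6 points, purely for illustration) showing how the log2 values come out evenly spaced even though the raw concentrations scrunch together:

```python
import math

# A 1:2 serial dilution: 1, 1/2, 1/4, 1/8, ...
dilutions = [1 / 2**i for i in range(6)]

# On a linear axis the small values crowd together,
# but their log2 values are evenly spaced: 0, -1, -2, ...
for d in dilutions:
    print(f"{d:<10} log2 = {math.log2(d)}")
```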
So, I start to get comfortable with log10 and then bam – they throw in an “e” – don’t let this scare you away. e is just a number (~2.718) and there’s even a convenient button for it on calculators. It’s kinda like π in that it’s a super-useful number that has a lot of digits and we use it a lot so we give it a letter instead of having to write out those digits each time. Just like “e” is just another name for that number, ln (standing for natural logarithm) is just a logarithm with base e.
So what’s so special about e? Why does it get its own name? A big part of the reason derives from the derivative!
Take a line (it doesn’t have to be straight, and it’s more interesting if it isn’t). The slope of the line is its steepness, and the derivative tells you that steepness at each point, so you can see how it changes. For a straight line, the slope is the same everywhere. But, in biology, we often encounter the situation where our line starts fairly flat, but then ramps up exponentially. “Exponentially” is just a fancy way of saying something gets multiplied by the same number over and over at a constant rate.
e.g. If a bacterium has a division time of 30 minutes, every 30 minutes, it will have doubled. So you go from 1 to 2. Then 30 minutes later each of those will have doubled, so you go from 2 to 4. Then 30 minutes later, each of those will have doubled, so you go from 4 to 8. Even though you’re applying the same “rules” as time goes on, the outcome starts changing more quickly. So you get something like _/ (except curvier).
If you want to know how many of something there are at a given time, you need to know how many there were at a previous time, how long it takes them to double, and how long it’s been since that previous time. In “exponential growth,” the *rate* at which the population changes is constant over time in terms of how long it takes a single bacterium to divide, but as you go along, you’re adding more and more input. And, because you’re adding more and more input, you get more and more output. Each time a cell splits in 2, it doubles the number of cells that will split in 2 in the next time period. Your input (time) is changing on a linear scale, but your output (cell count) is changing exponentially.
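A little Python sketch of that doubling math (the 30-minute doubling time and starting count of 1 are just assumptions for illustration):

```python
# Doubling growth: N(t) = N0 * 2**(t / doubling_time)
n0 = 1                # starting cell count (assumed)
doubling_time = 30    # minutes (assumed, from the bacteria example)

for t in (0, 30, 60, 90, 120):
    n = n0 * 2 ** (t / doubling_time)
    print(f"after {t:3d} min: {n:.0f} cells")
```

Time ticks along linearly (0, 30, 60…), while the count races ahead (1, 2, 4, 8, 16…).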
It’s easier to work with straight lines than with curves, and to get a straight line we just need something that changes linearly. We have something like that here (the number of times we double increases linearly with time) – we’re just emphasizing the “wrong part.” If we plot the graph on a “normal” scale, we’re emphasizing the “argument,” but if we plot it on a semilog scale we can emphasize the “exponent” – the “semi” just means we’re only logging one axis. As shown in the figures, we can either scrunch the axis or scrunch the data.
How does this relate to e? With some mathematical rearrangements, we get to something called the exponential growth equation, Nₜ = N₀e^(kt), where Nₜ is the population at some time t; N₀ is the population at the original time; and k is the growth rate.
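Here’s that equation in Python. One extra fact I’m bringing in (it’s standard, though not spelled out above): for a doubling time T, the growth rate is k = ln(2)/T, and the 30-minute doubling time is just borrowed from the bacteria example:

```python
import math

# Exponential growth: N(t) = N0 * e**(k*t)
T = 30                   # doubling time in minutes (assumed)
k = math.log(2) / T      # growth rate per minute (standard relation)
n0 = 1                   # starting population (assumed)

for t in (0, 30, 60, 90):
    print(f"t = {t:2d} min -> N = {n0 * math.exp(k * t):.2f}")
```

Every T minutes, e^(kt) picks up another factor of e^(ln 2) = 2, so this reproduces the doubling series.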
Growth rate vs. doubling time – these two terms are related but different. In our bacteria example, doubling time is how long (on average) it takes 1 cell to split in 2. Growth rate tells you how fast a collection of cells is growing. When a cell splits, it doesn’t wait around until its neighbors are ready to go. It does its own thing and starts doubling too. So growth is continuous (like continuously compounding interest, which can be modeled the same way).
Note: If we’re dividing in half instead of doubling, “growth rate” becomes “decay rate” and “doubling time” becomes “half life.” I deal with this a lot with radioisotopes (radioactive versions of elements we can use to label molecules to track them). Radioactive things get called radioactive because they let off radiation (a form of energy). But since they “give up” that energy, they’re depleting themselves. So they become less radioactive over time and the time it takes for half the radioactivity to be lost is the half life. So, when I’m working with RNA that I radiolabel with ³²P, which has a half life of about 14 days, I know that in 14 days the signal from the radiation will only be about half as strong. And in about a month it will only be about 1/4 as strong. Eventually it gets so low I have to do a fresh labeling reaction.
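And here’s the decay version as a Python sketch, using the ~14-day half life of ³²P from above (the starting signal of 100 is just an arbitrary unit for illustration):

```python
# Radioactive decay: signal(t) = signal0 * (1/2)**(t / half_life)
half_life = 14.0     # days, roughly the half life of 32P
signal0 = 100.0      # starting signal (arbitrary units, assumed)

for day in (0, 14, 28, 42):
    signal = signal0 * 0.5 ** (day / half_life)
    print(f"day {day:2d}: {signal:5.1f}% of the original signal")
```

Same math as the growth case, just with a base of 1/2 instead of 2, so the curve halves every half life (100 → 50 → 25 → 12.5).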
As for e? Turns out that the function eˣ is its own derivative – the slope (steepness) of eˣ at any point is just eˣ itself. Which makes calculations e-sier, so we often convert other things into natural log terms.
This post is part of my weekly “broadcasts from the bench” for The International Union of Biochemistry and Molecular Biology. Be sure to follow the IUBMB if you’re interested in biochemistry! They’re a really great international organization for biochemistry.
If you want to learn more about all sorts of things: #365DaysOfScience All (with topics listed) 👉 http://bit.ly/2OllAB0