Why is 8K screen resolution better than 4K, but an 8Å x-ray crystallography structure worse than a 4Å one? When structural biologists talk about resolution, they mean something related to, but distinct from, what people mean when they talk about the resolution of digital displays (TVs, computer and phone screens, etc.) (and something different than New Year’s Resolutions…). Both are concerned with how well you can tell that different objects are actually different objects (can you tell that 2 close-together dots are really 2 dots? Or do they “blur” into one?). But they report it in different ways…
Think about having 2 spots on a screen on the wall across the room from you. At first, you can easily see the spots are separate, ⚫️……….⚫️, but gradually bring the 2 spots closer to one another – ⚫️……⚫️, ⚫️…⚫️, ⚫️⚫️ – & at some point you will start to see them as 1 big blurry spot instead of 2 smaller spots. The distance between them at this point is the RESOLUTION LIMIT & it depends on how good your eyesight is. If the spots are blurry to begin with, their “blur radii” will quickly overlap. Having higher resolution is like having better eyesight – the spots are less fuzzy, so you can still tell the spots are separate even when they’re really close together.
In structural biology – and in vision – whether or not you can distinguish between 2 things depends on how physically close together they are – and we call the “cutoff point” where you stop being able to tell them apart the resolution limit. The smaller the cutoff, the better the viewer’s ability to tell things apart – so the higher the resolution. So in these cases, low numbers correspond to higher resolution. We’ll talk more about where this comes from in a bit, but first I want to explain how this compares to digital display resolution.
With digital screens, whether you see 2 dots or 1 (assuming they’re far enough apart that your eyes could tell anyway) depends on whether the display shows 2 dots or 1. It’s kinda like one of those “Lite-Brite” toys where you have different color pegs you can light up – the smallest thing you can represent is a single peg. So if you want to display 2 dots that are smaller/closer together than the size of a single peg, you’re out of luck.
Instead of “light pegs,” digital screens use pixels, which are separately controllable “picture elements” that can display different colors and intensities of light. If pixels are smaller than the thing you want to show, multiple pixels can work together to show it, but you can’t show things smaller than a pixel (though some displays use subpixels, which I’m not going to get into). So, even if your eyes would be able to tell that there were 2 dots in the area covered by a pixel, the computer can’t display it as 2 dots – it can only show you a single dot.
Screen resolution is usually reported as the pixel dimensions – like 10 pixels wide by 5 pixels tall. Though that would make a pretty lame screen! Instead, common screen resolutions are 1280 x 720 (720p); 1920 x 1080 (1080p); 2560 x 1440 (1440p); 3840 x 2160 (4K or 2160p); & 7680 x 4320 (8K or 4320p).
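Those pixel dimensions multiply out to the total pixel counts that screens get marketed by. Here’s a quick sketch (just the common resolutions listed above, nothing else assumed):

```python
# Total pixel counts for the common screen resolutions mentioned above.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    total = w * h
    print(f"{name}: {w} x {h} = {total:,} pixels (~{total / 1e6:.1f} megapixels)")
# 4K works out to ~8.3 megapixels; 8K to ~33.2 megapixels
```

Notice each step roughly quadruples the pixel count – doubling both dimensions.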
If something like a video or a picture is a higher resolution than a screen is capable of displaying (the file contains instructions for individually controlling more pixels than the screen has), it can be “downsampled” so that one screen pixel kinda averages multiple image pixels. And if the screen has more pixels than the image calls for, those extra pixels just spread out the job without giving you better resolution – you can’t add detail that wasn’t there.
Speaking of spreading out, when you make a screen bigger without changing the number of pixels, the pixels must be bigger. So even though the physical distance between pixels on opposite sides of the screen is bigger, you can’t display more different things between them – and even though the area of each pixel is bigger, you can’t stuff more elements into it – everything is just bigger. If you want to display finer detail, you need to increase the pixel density – pack more pixels into the same area. This is how you get better crispness & it’s usually reported as pixels per inch (ppi).
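Pixel density is easy to compute from the pixel dimensions & the screen’s diagonal size. A quick sketch (the 27″ and 65″ screen sizes here are just made-up examples):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: pixels along the diagonal divided by diagonal inches."""
    diagonal_px = math.sqrt(width_px**2 + height_px**2)
    return diagonal_px / diagonal_in

# Same 4K pixel grid spread over two different screen sizes:
print(round(pixels_per_inch(3840, 2160, 27), 1))  # ~163.2 ppi (monitor)
print(round(pixels_per_inch(3840, 2160, 65), 1))  # ~67.8 ppi (big TV)
```

Same number of pixels, bigger screen, lower density – which is why a 4K phone looks crisper than a 4K billboard.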
So, for screens, high resolution corresponds to high numbers, but for structural biology, high resolution refers to low numbers. And this can really trip people up (I know I found it confusing when I started out, being used to hearing about high resolution monitors, with companies bragging about their “megapixel” displays (displays with millions of pixels)).
In structural biology, people are more likely to brag about 4-billionths-of-an-inch resolution! We usually talk of resolution in terms of ANGSTROMS (Å) (10⁻¹⁰ meters, or 100 picometers (100 pm)), because this is the scale of the stuff we’re looking at.
Proteins are made up of atoms (of carbon (C), hydrogen (H), nitrogen (N), oxygen (O), & sulfur (S)). Atoms link up (with the help of protein enzymes (reaction mediators)) to form protein building blocks called amino acids & those link up (in the order specified by genetic recipes) to form chains that fold up into proteins. The length of an average carbon-carbon single bond is ~1.54Å and the length of an average carbon-hydrogen bond is ~1.09Å. This is about 7000 times shorter than the wavelength of visible light, and, since light is only helpful for telling apart things that are more than half of its wavelength apart, we need light that’s ~3500 times shorter – so we turn to X-rays in a technique called X-ray crystallography. more here: http://bit.ly/xraycrystallographyres
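The arithmetic behind that ~3500x figure, sketched out (using ~7000Å red light and a ~1Å bond-scale feature, roughly matching the numbers above):

```python
# To resolve a feature, the wavelength must be no more than ~2x the
# feature size (things closer than half a wavelength blur together).
visible = 7000.0   # Å, red end of visible light (the ~7000x comparison above)
feature = 1.0      # Å, roughly atomic-bond scale

max_usable_wavelength = 2 * feature      # ~2 Å is the longest that works
shrink_factor = visible / max_usable_wavelength
print(shrink_factor)  # 3500.0 -> need light ~3500x shorter: X-rays
```

Handily, common crystallography X-ray sources sit right around this scale – e.g. copper-anode sources emit at ~1.54Å.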
In protein crystallography, we get proteins to give up their watery coats (undissolve) and organize themselves into crystals made up of lots of individual protein molecules arranged in a repeating 3D pattern. And then we use x-rays to try to figure out where the atoms in a protein are. Atoms themselves are made up of smaller pieces (subatomic particles) – positively-charged protons & neutral neutrons clump together in a central nucleus & negatively-charged electrons (e⁻) whizz around them.
When we beam x-rays at protein crystals, the e⁻ scatter the x-rays. We record where those scattered rays hit a detector and this gives us a pattern of spots called the DIFFRACTION PATTERN. We work backwards from the spots to figure out where they scattered from to get an ELECTRON DENSITY MAP (a blobby thing showing the location of the e⁻). Then we build an ATOMIC MODEL of the protein (sticky or ribbony thing) into the map to “solve” the protein’s structure.
e⁻ are what scatter the rays (they’re able to interact with the electric field of X-rays (a form of electromagnetic radiation)), but it’s the nuclei we’re really interested in. Thankfully the nuclei are the central hubs around which e⁻ concentrate, so if we can find where the e⁻ are, we can “stick in” the nuclei they correspond to. How confidently we can do this depends on the resolution of our data. In HIGH RESOLUTION data, you can distinctly see the signal coming from individual atoms. BUT at LOWER RESOLUTION, their signals blend together so you see something more sausagey. So you know the atoms are in that general vicinity, but you can’t make out where exactly.
So what determines the resolution? Sorry if I get too technical, but…
Pictures often make it look like X-rays are “bouncing off” of atoms in a protein – but at the really tiny level it’s not like billiard balls bouncing off the sides of a pool table. Instead it’s more like tossing a billiard ball into a pool and then watching the ripples. The ball isn’t bouncing off the water, but it is perturbing it. The electric field of X-rays perturbs the electron clouds surrounding the nuclei of atoms, causing them to vibrate and give off their own waves (in all directions, but it’s convenient to draw them in just 1).
But as these individual waves travel, they join up w/ other waves to give us a resultant overall 3D wave. This adding together of waves from every atom is what’s described mathematically by a FOURIER TRANSFORM. Waves add together through something called superposition. If the waves are completely “in phase” (peaking at the same time) they’ll add CONSTRUCTIVELY and the resultant wave’ll be amplified. But if they’re out of phase – some peaking while others are trough-ing – this can dampen the wave – DESTRUCTIVE INTERFERENCE.
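You can see superposition with two simple sine waves – a toy sketch, not real crystallographic math:

```python
import math

def peak_of_sum(phase_shift, n_samples=360):
    """Sample two unit sine waves offset by phase_shift (radians) and
    return the peak amplitude of their superposition (their sum)."""
    return max(math.sin(2 * math.pi * t / n_samples) +
               math.sin(2 * math.pi * t / n_samples + phase_shift)
               for t in range(n_samples))

print(round(peak_of_sum(0.0), 2))      # 2.0 -> in phase: constructive
print(round(peak_of_sum(math.pi), 2))  # 0.0 -> out of phase: destructive
```

In phase, the waves stack to double the amplitude; half a wavelength out of phase, every peak meets a trough and they cancel.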
Due to the orderly, repetitive spacing of crystals, for each mini wave there’s almost always another wave completely out of phase, totally destructing that wave. But if the spacing’s just right and the waves are in sync, they add together constructively and produce a stronger wave that we can capture on our detector as a spot. Now it gets a little tricky theory-wise, but it makes things a lot easier to work with and think about model-wise: although this mini-waving is what’s happening at the single-atom level, we can *think* of “slicing” crystals into families of evenly-spaced planes that *do* act like pool table walls, reflecting incoming x-rays and giving you a dot on your detector. If you just take my word for it for now, keep reading to learn about how this relates to resolution. But if you want more deets, skip ahead – or hold off and I’ll get into it in more detail later for those interested.
Each dot on the diffraction pattern corresponds to a family of planes with spacing that meet “Bragg’s conditions” (more below) so they’re sometimes called Bragg’s planes. And they’re kinda like pixels in that they’re crystallography’s “smallest units of displayed info” & tell you about what’s contained between the planes – so the closer the slicing, the higher the resolution. BUT unlike pixels, which are physical, separate, entities, these planes can “overlap” because they’re not physical things. They don’t really exist, it’s just a mathematical way to represent crystals – like virtually slicing a banana. We get data from slices from different “angles” so we get information about all directions (like slicing a banana hotdog *and* hamburger style but a lot more ways than just 2).
The thinner the slices, the higher the resolution – so you should be able to just keep slicing thinner and thinner – but then why don’t we see an infinite number of dots?! Well, kinda like how it was easier to tell dots apart when they were physically further apart, the closer the plane spacing (thus the higher the resolution), the further apart their dots.
So the spots from close-together planes (thinner slicing, finer detail) are further from the center of the diffraction pattern. It’s like a reverse dart board where the high-res info is further from the bulls-eye. There are a lot fewer of these high-res spots & their signal’s less intense, so in order to capture them, our detector needs to be big & sensitive enough. BUT if it’s too sensitive, it can pick up “artifacts” that aren’t really coming from the protein. At some point, the noise (σ) starts to outweigh the true signal (I), so if we were to include more points we’d be adding noise instead of information. We have to decide where to make this cut-off & there are different statistical measures (including I/σ) we use to decide what “ring on the dart board” (RESOLUTION SHELL) to cut off at.
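A toy sketch of that cut-off decision – the shell numbers here are completely invented, and I/σ ≥ 2 is just one traditional rule of thumb (modern practice often uses other statistics too):

```python
# Walk the dart-board rings from low to high resolution and stop where
# the average signal-to-noise (I/sigma) drops below a chosen threshold.
shells = [  # (resolution in Å at shell edge, mean I/sigma in that shell)
    (4.0, 25.1), (3.2, 18.4), (2.8, 11.0),
    (2.4, 6.3), (2.1, 3.2), (1.9, 1.4), (1.8, 0.8),
]

def resolution_cutoff(shells, threshold=2.0):
    """Return the last shell (in Å) whose mean I/sigma clears the threshold."""
    best = None
    for res, i_over_sigma in shells:
        if i_over_sigma < threshold:
            break
        best = res
    return best

print(resolution_cutoff(shells))  # 2.1 -> report this structure at ~2.1 Å
```

The 1.9Å and 1.8Å rings contain “data,” but it’s mostly noise – including them would make the reported resolution look better while making the map worse.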
The reported resolution corresponds to where we make this cut-off in terms of what data to use – unlike with digital screens where the resolution is objective (can the screen display 2 dots or not?) we have to use some standardized subjectivity when figuring out resolution with this data – we have the data but when can we stop trusting it?
Often data quality measurements are reported for all the used data combined as well as for “highest resolution shell” (just outer ring) so readers can get a sense of whether they “cut” appropriately & didn’t try to use higher resolution “data” that was really just noise.
The cut-off’s really important because it limits what we even have a chance of seeing.
At low resolutions (higher numbers), we can make out things like the protein backbone. But it’s harder to be confident about the position of things like side chains (the unique parts that stick off the backbone). Once you get to higher resolutions, you start to be able to make out the side chains & their orientations. We call structures w/ resolution at or better than 1.2 Å “atomic resolution” because we can confidently make out the location of all the atoms. BUT this is rare in protein crystallography – the average published structure is ~2 Å – and some of the most exciting structures are lower resolution but of more “complex” things (which tend to be more “dynamic” and unwilling to sit nicely still in the precisely ordered arrangements required for crystallography). If even one of the thousands of individual protein molecules in a crystal is slightly different, it jumbles everything up.
This is because scattered x-rays from different points on the protein interfere w/ one another (either cancelling each other out or making the signal stronger) so that each diffraction spot contains some information about EVERY point in the protein – so the overall resolution corresponds to the whole protein. BUT some regions can still be “fuzzier” than others – like places where atoms move around more. We can describe this local fuzziness w/ something called the B-factor (aka displacement factor or temperature factor). Areas with a high B-factor are more “sausagey.”
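The B-factor has a handy physical reading: it relates to how far an atom jitters around its average position via B = 8π²⟨u²⟩ (B in Å², displacement u in Å). The example B-factors of 20 and 80 below are just illustrative values:

```python
import math

def rms_displacement(b_factor):
    """Root-mean-square atomic displacement (Å) from a B-factor (Å^2),
    using B = 8 * pi^2 * <u^2>."""
    return math.sqrt(b_factor / (8 * math.pi**2))

print(round(rms_displacement(20), 2))  # ~0.5 Å: fairly well-ordered atom
print(round(rms_displacement(80), 2))  # ~1.01 Å: a much "sausagier" region
```

So a B-factor of 80 means the atom is smeared over roughly a whole bond length – which is why those regions look blobby in the map.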
Sometimes regions of a protein (often places like flexible loops and “intrinsically disordered regions”) move around so much we can’t even tell there are atoms there, so we have to leave those “unresolved regions” of the protein out of the structure we build, even though we know there are actually protein parts there. Sometimes scientists will show this as dashed lines in their models – so keep an eye out for it.
Back to the details on the planes, as promised.
You know how in CT scans they take images in slices? Well, we can *imagine* slicing up a crystal similarly – into families of evenly-spaced planes that waves *do* bounce off of. All the beams would hit these planes at the same angle, so they’d bounce off at the same angle. But they’d hit the detector with their phase shifted (peaking earlier or later in relation to the other waves coming off of the other planes in the family). So the bounced-off waves would then almost always cancel each other out. But families of planes with certain spacings will add together constructively. This happens when the conditions of “Bragg’s Law” are fulfilled.
Their phase shift depends on the wavelength (λ), the angle the incoming wave hits at (which is also the angle it bounces off at) (θ), & the distance between the planes (d). BRAGG’S LAW says that in order for constructive interference to occur,
nλ = 2dsinθ
Why? The incoming x-ray wave hits one plane before the one under it. It has to travel further to reach the lower plane & the reflected ray has to travel further to reach the detector. Bragg’s law tells you the condition needed for the combined length of this “detour” to be an integer # (n) of λ so the waves get back in sync.
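You can turn Bragg’s law around and solve for the angle – a quick sketch using the common copper-source X-ray wavelength (the plane spacings are just example values). It also shows why finer slicing lands spots further out on the detector:

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta). Solve for the angle theta
# at which a plane family with spacing d reflects constructively.
def bragg_angle_deg(wavelength, d, n=1):
    return math.degrees(math.asin(n * wavelength / (2 * d)))

wavelength = 1.54  # Å, Cu K-alpha, a common home-source X-ray wavelength
for d in (4.0, 2.0, 1.0):  # finer plane spacing -> bigger scattering angle
    print(f"d = {d} Å -> theta = {bragg_angle_deg(wavelength, d):.1f} deg")
# d = 4.0 Å -> theta = 11.1 deg
# d = 2.0 Å -> theta = 22.6 deg
# d = 1.0 Å -> theta = 50.3 deg
```

Smaller d (thinner slices, higher resolution) forces a bigger θ – which is exactly why the high-res spots sit further from the bulls-eye, and why d can never be smaller than λ/2 (sinθ can’t exceed 1).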
And we call the families of planes that fit these conditions “Bragg’s planes” or “hkl planes” where hkl is a notation used to describe how many slices are made in each dimension. Each dot in a diffraction pattern corresponds to a family of planes. More here: http://bit.ly/2qFhlSG & http://bit.ly/2yFCfFx & http://bit.ly/2OfOUnO
I cover a lot of these topics in more detail in past posts. You can find them here: http://bit.ly/2OllAB0