Molecular Monday: What Color Are Atoms?

December 5, 2016


Welcome to Molecular Monday! On the first Monday of the month, we take a closer look at the atoms and molecules that make up our physical universe. Today, we’re looking at:

CPK Coloring

When I first introduced this Molecular Monday series, I knew I’d be drawing a lot of atoms and molecules, but I wasn’t sure if there was a right way or a wrong way to draw them. For starters, I wasn’t even sure what colors I should use.

Atoms and molecules do not really have colors, in the sense that they're smaller than the wavelengths of visible light, so light can't reflect off them the way it does off everyday objects. At one point, I wondered if I should color them based on their spectroscopic signatures, but that line of research got complicated really fast.

Eventually I discovered that chemists have a (mostly) standardized color-coding system for modeling the atoms in a molecule. It's called CPK coloring, in honor of Robert Corey, Linus Pauling, and Walter Koltun. Apparently Corey and Pauling created this system in the 1950s, and Koltun improved it by adding more colors in the 1960s (improving things by adding more colors is basically what the '60s were all about).

So following the CPK coloring scheme, hydrogen atoms are white, and oxygen atoms are red. (Example: water molecule, H2O.)


Nitrogen atoms are blue. (Example: molecular nitrogen, N2.)


Sulfur atoms are yellow. (Example: hydrogen sulfide molecule, H2S.)


And carbon atoms are either black or grey. I draw them in grey because otherwise you couldn’t see their little smiley faces. (Example: benzene molecule, C6H6.)


Personally, I think it would make more sense to switch the colors of oxygen and nitrogen. That way, water molecules would have blue in them, rather than bright red. But otherwise, CPK coloring is a pretty good system.

Typically green is assigned to either chlorine or fluorine, or sometimes both. Beyond that, modern chemists seem to have strayed from the original psychedelic system Koltun invented. I guess the rarer an element is, the less we worry about sticking to a standardized color code.
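The color assignments above boil down to a simple lookup table. Here's a minimal sketch in Python; the `atom_color` helper and the pink fallback for elements outside the core scheme are my own conventions for illustration, not part of CPK itself.

```python
# Core CPK color assignments, as described above.
CPK_COLORS = {
    "H": "white",
    "C": "grey",    # traditionally black; grey preserves the smiley faces
    "N": "blue",
    "O": "red",
    "S": "yellow",
    "Cl": "green",  # green is variously given to chlorine, fluorine, or both
    "F": "green",
}

def atom_color(symbol):
    """Return the CPK display color for an element symbol.

    Falls back to pink for elements outside the core scheme
    (an arbitrary choice here, since usage varies for rarer elements).
    """
    return CPK_COLORS.get(symbol, "pink")
```

So a water molecule would render as one red sphere (`atom_color("O")`) with two white ones (`atom_color("H")`) attached.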

For my purposes, that hasn’t been a problem. Almost every molecule I write about on this blog is composed of carbon, hydrogen, and maybe oxygen and/or nitrogen. Occasionally sulfur gets into the mix, but that’s basically it.

For next month’s Molecular Monday post, I think we’ll continue looking at some of the other issues involved with drawing molecules. I settled on ball-and-stick models, but that’s not the only way to do things.

The EM Drive: Is It for Real?

November 29, 2016

This weekend, I read the recently published paper on NASA’s “impossible” EM drive. Or rather, I read about the “closed radio-frequency resonant cavity” designed and tested by Eagleworks Laboratories (which is part of NASA).

Basically, this closed radio-frequency thing is a box with radio waves bouncing around inside it. Because of the box’s unusual shape, the radio waves end up pushing more on one side of the box than the other, which generates thrust. Supposedly. Even though that violates conservation of momentum.

This post is a review of the paper itself, and nothing more, because I’ve found that responsible scientists and quack scientists often reveal themselves in the way they write their papers. And whatever else might be going on with this physics-defying new engine design, the paper does not appear to be quack science.

  • Experimental methods and equipment are documented in meticulous detail, and sections are included describing “force measurements procedures” and “force measurement uncertainty.”
  • The researchers appear to be presenting all of their data, or at least they don’t appear to be deliberately hiding anything. They also make a point of explaining the data analysis techniques they used.
  • There’s a lengthy section on potential sources of experimental error. The paper explains how each possible error was corrected, or it tells us why the researchers believe the error is not statistically significant. The important thing is that these possible experimental errors are acknowledged to the reader.

Now I'm not a scientist or an engineer, so I can't personally evaluate the data being presented here. But the fact that the Eagleworks team shares so much information and goes into such extensive technical detail is a good sign (even though it makes for rather dull reading).

It means they’re not asking us to just take their word for it. Anyone with the necessary knowledge, resources, and technical skills could evaluate the data for themselves or attempt to recreate the experiment in order to independently verify the test results. And that’s how science is supposed to be done.

That does not necessarily mean the EM drive works. A paper like this should be seen as the opening of a conversation. The Eagleworks team discovered something. Something that seems to violate conservation of momentum, or perhaps undermines the Copenhagen interpretation of quantum mechanics.

Follow-up papers will continue the conversation, most likely by investigating those possible sources of error the Eagleworks team mentioned, or by trying to find sources of error the Eagleworks team may have overlooked. And my guess is that the conversation will end at that point.

But if it turns out the EM drive really does work, if the test results can’t be explained away by an experimental error, then the conversation will move on to trying to figure out what’s wrong with our current understanding of the laws of physics.

Regardless of how this plays out, it’s always good to see real scientific discourse in action.

I Think You’ll Find It’s a Bit More Complicated Than That — A Book Review

August 29, 2016

Today I thought I’d try doing a book review. Not really my thing, but since I read a lot of sciency books anyway, why not blog about them? I’m going to start with a book called I Think You’ll Find It’s a Bit More Complicated Than That by Ben Goldacre.

I picked this book up based solely on the title. It expresses bluntly exactly how I feel about the portrayal of science in the popular press and in popular culture in general.

The book is actually a collection of articles, most of which originally appeared in the Guardian. Goldacre tackles news reports, advertisements, and quack scientists in an effort to show how scientific data get oversimplified or misinterpreted by the media and others. As a result, real science morphs into pseudoscience, and pseudoscience masquerades as real science.

A lot of the book seems to confirm a thought that I’ve had before (and written about before): be wary of purported scientists who won’t show their methods or data. Science is about sharing as much as possible, not protecting your secret recipes for cancer “cures” or whatever.

There was one common crime against science that I was not previously aware of: misleading press releases. Even reputable institutions conducting legitimate research have P.R. departments, and these P.R. departments will occasionally (or perhaps not so occasionally) overhype scientific discoveries in their press releases.

I intend to be far more skeptical of press releases in the future. I also intend to pick up more of Goldacre's books: Bad Science and Bad Pharma. Even though these books are outside my primary field of interest (planetary science), I've come to believe that the best way to understand how science does work (or at least should work) is to examine science gone wrong.

Real Science vs. Fake Science

June 14, 2016

In writing this blog, I’m trying to teach myself science. Real science. At least, enough real science to be able to write competent science fiction.


Since most news articles about science are embarrassingly unreliable (damn those shruggies!), I end up reading a lot of scientific papers. And there’s something I’ve noticed. It’s like there’s a pattern to how scientific papers are written (at least, the legitimate ones).

Science Done Right

Taken as a whole, scientific papers sort of read like this:

Hey, I (or we) just noticed this weird thing which might have implications for how we think about other things. Here’s my (or our) best guess about what’s going on here, and here’s all the details so you can check this weird thing out for yourself. Hopefully we (the scientific community) can get to the bottom of this mystery.

A recent paper on the Planet Nine hypothesis is a great example. In the paper, researchers explain that they've noticed something odd happening in the scattered disk region of our Solar System.

The researchers’ best guess is that an as-yet-undetected planet is perturbing the scattered disk. They then present all their data. All of it. Not just the parts that support their hypothesis. This shows that the researchers didn’t cherry-pick data to suit their idea. And in the end, the paper suggests new lines of research that could help prove or disprove this whole Planet Nine thing.

Doing Science Wrong

I’ve also encountered another kind of paper, a paper that reads more like this:

I (or we) hereby proclaim a new discovery which proves (or disproves) this other thing. End of discussion.

Sometimes these papers will also say things like:

We did an experiment. You can trust that we did it right. Here is some of our data; just the stuff that we believe is relevant.

And often, these papers will end with a line like:

Why, oh why, is the scientific community conspiring against me (or us) to hide the truth?

Real Science vs. Fake Science

In order to understand how real science works, you have to also learn a little about fake science so that you can tell the difference.

Fortunately, fake science is fairly easy to identify. There are so many red flags: bold proclamations, lack of detail concerning experimental methods, withholding experimental data that is deemed “irrelevant.” The whole “conspiracy to hide the truth” thing comes up a lot too. According to fake scientists:

Anyone who disputes my brilliant theory must be part of the conspiracy!

Meanwhile, real scientific papers tend to feel like a conversation. Mind you, it’s not always a polite conversation. One paper might be an opening argument, the next a rebuttal, and so forth. Scientific egos bruise easily, it seems, but eventually some sort of consensus is achieved.

At least until someone notices another weird thing which might have implications for whatever the consensus opinion turned out to be.