
My Big Numbers talk at Festivaletteratura


Last weekend, I gave a talk on big numbers, as well as a Q&A about quantum computing, at Festivaletteratura: one of the main European literary festivals, held every year in beautiful and historic Mantua, Italy.  (For those who didn’t know, as I didn’t: this is the city where Virgil was born, and where Romeo gets banished in Romeo and Juliet.  Its layout hasn’t substantially changed since the Middle Ages.)

I don’t know how much big numbers or quantum computing have to do with literature, but I relished the challenge of explaining these things to an audience that was not merely “popular” but humanistically rather than scientifically inclined.  In this case, there was not only a math barrier, but also a language barrier: the festival was mostly in Italian, and only some of the attendees knew English, to varying degrees.  The quantum computing session was live-translated into Italian (the challenge faced by the translator in not mangling this material provided a lot of free humor), but the big numbers talk wasn’t.  What’s more, the talk was held outdoors, on the steps of a cathedral, with tons of background noise, including a bell that loudly chimed halfway through the talk.  So if my own words weren’t simple and clear, forget it.

Anyway, in the rest of this post, I’ll share a writeup of my big numbers talk.  The talk has substantial overlap with my “classic” Who Can Name The Bigger Number? essay from 1999.  While I don’t mean to supersede or displace that essay, the truth is that I think and write somewhat differently than I did as a teenager (whuda thunk?), and I wanted to give Scott2017 a crack at material that Scott1999 has been over already.  If nothing else, the new version is more up-to-date and less self-indulgent, and it includes points (for example, the relation between ordinal generalizations of the Busy Beaver function and the axioms of set theory) that I didn’t understand back in 1999.

For regular readers of this blog, I don’t know how much will be new here.  But if you’re one of those people who keeps introducing themselves at social events by saying “I really love your blog, Scott, even though I don’t understand anything that’s in it”—something that’s always a bit awkward for me, because, uh, thanks, I guess, but what am I supposed to say next?—then this lecture is for you.  I hope you’ll read it and understand it.

Thanks so much to Festivaletteratura organizer Matteo Polettini for inviting me, and to Fabrizio Illuminati for moderating the Q&A.  I had a wonderful time in Mantua, although I confess there’s something about being Italian that I don’t understand.  Namely: how do you derive any pleasure from international travel, if anywhere you go, the pizza, pasta, bread, cheese, ice cream, coffee, architecture, scenery, historical sights, and pretty much everything else all fall short of what you’re used to?

Big Numbers

by Scott Aaronson
Sept. 9, 2017

My four-year-old daughter sometimes comes to me and says something like: “daddy, I think I finally figured out what the biggest number is!  Is it a million million million million million million million million thousand thousand thousand hundred hundred hundred hundred twenty eighty ninety eighty thirty a million?”

So I reply, “I’m not even sure exactly what number you named—but whatever it is, why not that number plus one?”

“Oh yeah,” she says.  “So is that the biggest number?”

Of course there’s no biggest number, but it’s natural to wonder what are the biggest numbers we can name in a reasonable amount of time.  Can I have two volunteers from the audience—ideally, two kids who like math?

[Two kids eventually come up.  I draw a line down the middle of the blackboard, and place one kid on each side of it, each with a piece of chalk.]

So the game is, you each have ten seconds to write down the biggest number you can.  You can’t write anything like “the other person’s number plus 1,” and you also can’t write infinity—it has to be finite.  But other than that, you can write basically anything you want, as long as I’m able to understand exactly what number you’ve named.  [These instructions are translated into Italian for the kids.]

Are you ready?  On your mark, get set, GO!

[The kid on the left writes something like: 999999999

While the kid on the right writes something like: 11111111111111111

Looking at these, I comment:]

9 is bigger than 1, but 1 is a bit faster to write, and as you can see that makes the difference here!  OK, let’s give our volunteers a round of applause.

[I didn’t plant the kids, but if I had, I couldn’t have designed a better jumping-off point.]

I’ve been fascinated by how to name huge numbers since I was a kid myself.  When I was a teenager, I even wrote an essay on the subject, called Who Can Name the Bigger Number?  That essay might still get more views than any of the research I’ve done in all the years since!  I don’t know whether to be happy or sad about that.

I think the reason the essay remains so popular, is that it shows up on Google whenever someone types something like “what is the biggest number?”  Some of you might know that Google itself was named after the huge number called a googol: 10^100, or 1 followed by a hundred zeroes.

Of course, a googol isn’t even close to the biggest number we can name.  For starters, there’s a googolplex, which is 1 followed by a googol zeroes.  Then there’s a googolplexplex, which is 1 followed by a googolplex zeroes, and a googolplexplexplex, and so on.  But one of the most basic lessons you’ll learn in this talk is that, when it comes to naming big numbers, whenever you find yourself just repeating the same operation over and over and over, it’s time to step back, and look for something new to do that transcends everything you were doing previously.  (Applications to everyday life left as exercises for the listener.)
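To make the “repeated plex” pattern concrete, here is a minimal Python sketch (the helper name `plex` is my own, purely for illustration); Python’s arbitrary-precision integers handle a googol with no trouble, even though a googolplex’s digits could never be written out:

```python
# A googol is 1 followed by 100 zeroes -- easy for Python's big integers.
googol = 10 ** 100
assert len(str(googol)) == 101  # the digit 1 plus 100 zeroes

# The "plex" operation: n -> 1 followed by n zeroes.
def plex(n):
    return 10 ** n

assert plex(100) == googol
# plex(googol) would be a googolplex: a number with a googol zeroes, far
# more digits than there are particles in the observable universe.  And
# plex(plex(googol)) would be a googolplexplex -- the same operation again,
# which is exactly the rut the talk says to climb out of.
```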

One of the first people to think about systems for naming huge numbers was Archimedes, who was Greek but lived in what’s now Italy (specifically Syracuse, Sicily) in the 200s BC.  Archimedes wrote a sort of pop-science article—possibly history’s first pop-science article—called The Sand-Reckoner.  In this remarkable piece, which was addressed to the King of Syracuse, Archimedes sets out to calculate an upper bound on the number of grains of sand needed to fill the entire universe, or at least the universe as known in antiquity.  He thereby seeks to refute people who use “the number of sand grains” as a shorthand for uncountability and unknowability.

Of course, Archimedes was just guessing about the size of the universe, though he did use the best astronomy available in his time—namely, the work of Eratosthenes, who anticipated Copernicus.  Besides estimates for the size of the universe and of a sand grain, the other thing Archimedes needed was a way to name arbitrarily large numbers.  Since he didn’t have Arabic numerals or scientific notation, his system was basically just to compose the word “myriad” (which means 10,000) into bigger and bigger chunks: a “myriad myriad” gets its own name, a “myriad myriad myriad” gets another, and so on.  Using this system, Archimedes estimated that ~10^63 sand grains would suffice to fill the universe.  Ancient Hindu mathematicians were able to name similarly large numbers using similar notations.  In some sense, the next really fundamental advances in naming big numbers wouldn’t occur until the 20th century.

We’ll come to those advances, but before we do, I’d like to discuss another question that motivated Archimedes’ essay: namely, what are the biggest numbers relevant to the physical world?

For starters, how many atoms are in a human body?  Anyone have a guess?  About 10^28.  (If you remember from high-school chemistry that a “mole” is 6×10^23, this is not hard to ballpark—though you do need to remember that we have way more hydrogen atoms than we would were we made entirely of water.)
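The ballpark works out as follows; this is a sketch with rough assumed figures (a 70 kg body, and an average molar mass of about 7 g/mol per atom, pulled well below water’s 18 g/mol by all those light hydrogen atoms):

```python
avogadro = 6.0e23        # atoms per mole
body_mass_g = 70_000     # a 70 kg adult (assumption)
mean_molar_mass = 7.0    # rough average grams per mole of atoms in a body (assumption)

moles_of_atoms = body_mass_g / mean_molar_mass   # ~10,000 mol
atoms = moles_of_atoms * avogadro                # ~6e27, i.e. about 10^28
```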

How many stars are in our galaxy?  Estimates vary, but let’s say a few hundred billion.

How many stars are in the entire observable universe?  Something like 10^23.

How many subatomic particles are in the observable universe?  No one knows for sure—for one thing, because we don’t know what the dark matter is made of—but 10^90 is a reasonable estimate.

Some of you might be wondering: but for all anyone knows, couldn’t the universe be infinite?  Couldn’t it have infinitely many stars and particles?  The answer to that is interesting: indeed, no one knows whether space goes on forever or curves back on itself, like the surface of the earth.  But because of the dark energy, discovered in 1998, it seems likely that even if space is infinite, we can only ever see a finite part of it.  The dark energy is a force that pushes the galaxies apart.  The further away they are from us, the faster they’re receding—with galaxies far enough away from us receding faster than light, meaning that any signals they emit can never reach us.

Right now, we can see the light from galaxies that are up to about 45 billion light-years away.  (Why 45 billion light-years, you ask, if the universe itself is “only” 13.8 billion years old?  Well, when the galaxies emitted the light, they were a lot closer to us than they are now!  The universe expanded in the meantime.)  If, as seems likely, the dark energy has the form of a cosmological constant, then it’s not just that the galaxies much further away than 45 billion light-years can’t be seen by us right now—it’s that they can never be seen.

In practice, many big numbers come from the phenomenon of exponential growth.  Here’s a graph showing the three functions n, n^2, and 2^n:

The difference is, n and even n^2 grow in a more-or-less manageable way, but 2^n just shoots up off the screen.  The shooting-up has real-life consequences—indeed, more important consequences than just about any other mathematical fact one can think of.

The current human population is about 7.5 billion (when I was a kid, it was more like 5 billion).  Right now, the population is doubling about once every 64 years.  If it continues to double at that rate, and humans don’t colonize other worlds, then you can calculate that, less than 3000 years from now, the entire earth, all the way down to the core, will be made of human flesh.  I hope the people use deodorant!
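You can check that figure yourself; here is a rough sketch, assuming a 70 kg average human and a steady 64-year doubling time (both assumptions, chosen only to illustrate the arithmetic):

```python
import math

earth_mass_kg = 5.97e24        # mass of the Earth
human_mass_kg = 70.0           # average human (assumption)
population = 7.5e9
doubling_time_years = 64

max_humans = earth_mass_kg / human_mass_kg             # ~8.5e22 people
doublings_needed = math.log2(max_humans / population)  # ~43 doublings
years = doublings_needed * doubling_time_years         # under 3000 years
```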

Nuclear chain reactions are a second example of exponential growth: one uranium or plutonium nucleus fissions and emits neutrons that cause, let’s say, two other nuclei to fission, which then cause four nuclei to fission, then 8, 16, 32, and so on, until boom, you’ve got your nuclear weapon (or your nuclear reactor, if you do something to slow the process down).  A third example is compound interest, as with your bank account, or for that matter an entire country’s GDP.  A fourth example is Moore’s Law, which is the thing that said that the number of components in a microprocessor doubled every 18 months (with other metrics, like memory, processing speed, etc., on similar exponential trajectories).  Here at Festivaletteratura, there’s a “Hack Space,” where you can see state-of-the-art Olivetti personal computers from around 1980: huge desk-sized machines with maybe 16K of usable RAM.  Moore’s Law is the thing that took us from those (and the even bigger, weaker computers before them) to the smartphone that’s in your pocket.

However, a general rule is that any time we encounter exponential growth in our observed universe, it can’t last for long.  It has to stop, if not sooner, then when it runs out of whatever resource it needs to continue: for example, food or land in the case of people, fuel in the case of a nuclear reaction.  OK, but what about Moore’s Law: what physical constraint will stop it?

By some definitions, Moore’s Law has already stopped: computers aren’t getting that much faster in terms of clock speed; they’re mostly just getting more and more parallel, with more and more cores on a chip.  And it’s easy to see why: the speed of light is finite, which means the speed of a computer will always be limited by the size of its components.  And transistors are now just 15 nanometers across; a couple orders of magnitude smaller and you’ll be dealing with individual atoms.  And unless we leap really far into science fiction, it’s hard to imagine building a transistor smaller than one atom across!

OK, but what if we do leap really far into science fiction?  Forget about engineering difficulties: is there any fundamental principle of physics that prevents us from making components smaller and smaller, and thereby making our computers faster and faster, without limit?

While no one has tested this directly, it appears from current physics that there is a fundamental limit to speed, and that it’s about 10^43 operations per second, or one operation per Planck time.  Likewise, it appears that there’s a fundamental limit to the density with which information can be stored, and that it’s about 10^69 bits per square meter, or one bit per Planck area. (Surprisingly, the latter limit scales only with the surface area of a region, not with its volume.)

What would happen if you tried to build a faster computer than that, or a denser hard drive?  The answer is: cycling through that many different states per second, or storing that many bits, would involve concentrating so much energy in so small a region, that the region would exceed what’s called its Schwarzschild radius.  If you don’t know what that means, it’s just a fancy way of saying that your computer would collapse to a black hole.  I’ve always liked that as Nature’s way of telling you not to do something!

Note that, on the modern view, a black hole itself is not only the densest possible object allowed by physics, but also the most efficient possible hard drive, storing ~10^69 bits per square meter of its event horizon—though the bits are not so easy to retrieve! It’s also, in a certain sense, the fastest possible computer, since it really does cycle through 10^43 states per second—though it might not be computing anything that anyone would care about.

We can also combine these fundamental limits on computer speed and storage capacity, with the limits that I mentioned earlier on the size of the observable universe, which come from the cosmological constant.  If we do so, we get an upper bound of ~10^122 on the number of bits that can ever be involved in any computation in our world, no matter how large: if we tried to do a bigger computation than that, the far parts of it would be receding away from us faster than the speed of light.  In some sense, this 10^122 is the most fundamental number that sets the scale of our universe: on the current conception of physics, everything you’ve ever seen or done, or will see or will do, can be represented by a sequence of at most 10^122 ones and zeroes.

Having said that, in math, computer science, and many other fields (including physics itself), many of us meet bigger numbers than 10^122 dozens of times before breakfast! How so? Mostly because we choose to ask, not about the number of things that are, but about the number of possible ways they could be—not about the size of ordinary 3-dimensional space, but the sizes of abstract spaces of possible configurations. And the latter are subject to exponential growth, continuing way beyond 10^122.

As an example, let’s ask: how many different novels could possibly be written (say, at most 400 pages long, with a normal-size font, yadda yadda)? Well, we could get a lower bound on the number just by walking around here at Festivaletteratura, but the number that could be written certainly far exceeds the number that have been written or ever will be. This was the subject of Jorge Luis Borges’ famous story The Library of Babel, which imagined an immense library containing every book that could possibly be written up to a certain length. Of course, the vast majority of the books are filled with meaningless nonsense, but among their number one can find all the great works of literature, books predicting the future of humanity in perfect detail, books predicting the future except with a single error, etc. etc. etc.

To get more quantitative, let’s simply ask: how many different ways are there to fill the first page of a novel?  Let’s go ahead and assume that the page is filled with intelligible (or at least grammatical) English text, rather than arbitrary sequences of symbols, at a standard font size and page size.  In that case, using standard estimates for the entropy (i.e., compressibility) of English, I estimated this morning that there are maybe ~10^700 possibilities.  So, forget about the rest of the novel: there are astronomically more possible first pages than could fit in the observable universe!
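For a rough sense of where a number like that comes from, here is a back-of-the-envelope sketch (my figures here are assumptions for illustration, not necessarily the ones behind the talk’s estimate): take English at about 1 bit of entropy per character, and a full page at a couple thousand characters.

```python
import math

bits_per_char = 1.0      # rough entropy of English text per character (assumption)
chars_per_page = 2300    # characters on a full page at a standard font (assumption)

# Number of plausible pages is ~2^(bits per page); take log10 to see the scale.
log10_pages = bits_per_char * chars_per_page * math.log10(2)
# ~692, i.e. on the order of 10^700 possible first pages
```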

We could likewise ask: how many chess games could be played?  I’ve seen estimates from 10^40 up to 10^120, depending on whether we count only “sensible” games or also “absurd” ones (though in all cases, with a limit on the length of the game as might occur in a real competition). For Go, by contrast, which is played on a larger board (19×19 rather than 8×8) the estimates for the number of possible games seem to start at 10^800 and only increase from there. This difference in magnitudes has something to do with why Go is a “harder” game than chess, why computers were able to beat the world chess champion already in 1997, but the world Go champion not until last year.

Or we could ask: given a thousand cities, how many routes are there for a salesman that visit each city exactly once? We write the answer as 1000!, pronounced “1000 factorial,” which just means 1000×999×998×…×2×1: there are 1000 choices for the first city, then 999 for the second city, 998 for the third, and so on.  This number is about 4×10^2567.  So again, more possible routes than atoms in the visible universe, yadda yadda.
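Python’s arbitrary-precision integers make this count easy to verify exactly:

```python
import math

routes = math.factorial(1000)   # 1000 x 999 x 998 x ... x 2 x 1
digits = len(str(routes))       # 2568 digits, i.e. about 4 x 10^2567
```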

But suppose the salesman is interested only in the shortest route that visits each city, given the distance between every city and every other.  We could then ask: to find that shortest route, would a computer need to search exhaustively through all 1000! possibilities—or, maybe not all 1000!, maybe it could be a bit more clever than that, but at any rate, a number that grew exponentially with the number of cities n?  Or could there be an algorithm that zeroed in on the shortest route dramatically faster: say, using a number of steps that grew only linearly or quadratically with the number of cities?
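Here is what the exhaustive approach looks like in a few lines of Python; it is fine for a handful of cities, but the permutations loop is exactly the factorial explosion the question is about (the distance matrix below is made up for illustration):

```python
from itertools import permutations

def shortest_route(dist):
    """Try every ordering of the cities; feasible only for tiny instances."""
    n = len(dist)
    best_length, best_tour = float("inf"), None
    for tour in permutations(range(n)):        # n! candidate routes
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

# A made-up 4-city example: only 4! = 24 routes to check.
dist = [[0, 1, 4, 7],
        [1, 0, 2, 5],
        [4, 2, 0, 3],
        [7, 5, 3, 0]]
# shortest_route(dist) finds the length-6 route visiting the cities in order.
```

With 1000 cities the same loop would need 1000! iterations, which is the whole point: no computer could ever finish it.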

This, modulo a few details, is one of the most famous unsolved problems in all of math and science.  You may have heard of it; it’s called P vs. NP.  P (Polynomial-Time) is the class of problems that an ordinary digital computer can solve in a “reasonable” amount of time, where we define “reasonable” to mean, growing at most like the size of the problem (for example, the number of cities) raised to some fixed power.  NP (Nondeterministic Polynomial-Time) is the class for which a computer can at least recognize a solution in polynomial-time.  If P=NP, it would mean that for every combinatorial problem of this sort, for which a computer could recognize a valid solution—Sudoku puzzles, scheduling airline flights, fitting boxes into the trunk of a car, etc. etc.—there would be an algorithm that cut through the combinatorial explosion of possible solutions, and zeroed in on the best one.  If P≠NP, it would mean that at least some problems of this kind required astronomical time, regardless of how cleverly we programmed our computers.

Most of us believe that P≠NP—indeed, I like to say that if we were physicists, we would’ve simply declared P≠NP a “law of nature,” and given ourselves Nobel Prizes for the discovery of the law!  And if it turned out that P=NP, we’d just give ourselves more Nobel Prizes for the law’s overthrow.  But because we’re mathematicians and computer scientists, we call it a “conjecture.”

Another famous example of an NP problem is: I give you (say) a 2000-digit number, and I ask you to find its prime factors.  Multiplying two 1000-digit numbers is easy, at least for a computer, but factoring the product back into primes seems astronomically hard—at least, with our present-day computers running any known algorithm.  Why does anyone care?  Well, you might know that, any time you order something online—in fact, every time you see a little padlock icon in your web browser—your personal information, like (say) your credit card number, is being protected by a cryptographic code that depends on the belief that factoring huge numbers is hard, or a few closely-related beliefs.  If P=NP, then those beliefs would be false, and indeed all cryptography that depends on hard math problems would be breakable in “reasonable” amounts of time.

In the special case of factoring, though—and most of the other number theory problems that underlie modern cryptography—it wouldn’t even take anything as shocking as P=NP for them to fall.  Actually, that provides a good segue into another case where exponentials, and numbers vastly larger than 10^122, regularly arise in the real world: quantum mechanics.

Some of you might have heard that quantum mechanics is complicated or hard.  But I can let you in on a secret, which is that it’s incredibly simple once you take the physics out of it!  Indeed, I think of quantum mechanics as not exactly even “physics,” but more like an operating system that the rest of physics runs on as application programs.  It’s a certain generalization of the rules of probability.  In one sentence, the central thing quantum mechanics says is that, to fully describe a physical system, you have to assign a number called an “amplitude” to every possible configuration that the system could be found in.  These amplitudes are used to calculate the probabilities that the system will be found in one configuration or another if you look at it.  But the amplitudes aren’t themselves probabilities: rather than just going from 0 to 1, they can be positive or negative or even complex numbers.

For us, the key point is that, if we have a system with (say) a thousand interacting particles, then the rules of quantum mechanics say we need at least 2^1000 amplitudes to describe it—which is way more than we could write down on pieces of paper filling the entire observable universe!  In some sense, chemists and physicists knew about this immensity since 1926.  But they knew it mainly as a practical problem: if you’re trying to simulate quantum mechanics on a conventional computer, then the resources needed to do so increase exponentially with the number of particles being simulated.  Only in the 1980s did a few physicists, such as Richard Feynman and David Deutsch, suggest “turning the lemon into lemonade,” and building computers that themselves would exploit the exponential growth of amplitudes.  Supposing we built such a computer, what would it be good for?  At the time, the only obvious application was simulating quantum mechanics itself!  And that’s probably still the most important application today.

In 1994, though, a guy named Peter Shor made a discovery that dramatically increased the level of interest in quantum computers.  That discovery was that a quantum computer, if built, could factor an n-digit number using a number of steps that grows only like about n^2, rather than exponentially with n.  The upshot is that, if and when practical quantum computers are built, they’ll be able to break almost all the cryptography that’s currently used to secure the Internet.

(Right now, only small quantum computers have been built; the record for using Shor’s algorithm is still to factor 21 into 3×7 with high statistical confidence!  But Google is planning within the next year or so to build a chip with 49 quantum bits, or qubits, and other groups around the world are pursuing parallel efforts.  Almost certainly, 49 qubits still won’t be enough to do anything useful, including codebreaking, but it might be enough to do something classically hard, in the sense of taking at least ~2^49, or 563 trillion, steps to simulate classically.)

I should stress, though, that for other NP problems—including breaking various other cryptographic codes, and solving the Traveling Salesman Problem, Sudoku, and the other combinatorial problems mentioned earlier—we don’t know any quantum algorithm analogous to Shor’s factoring algorithm.  For these problems, we generally think that a quantum computer could solve them in roughly the square root of the number of steps that would be needed classically, because of another famous quantum algorithm called Grover’s algorithm.  But getting an exponential quantum speedup for these problems would, at the least, require an additional breakthrough.  No one has proved that such a breakthrough in quantum algorithms is impossible: indeed, no one has proved that it’s impossible even for classical algorithms; that’s the P vs. NP question!  But most of us regard it as unlikely.

If we’re right, then the upshot is that quantum computers are not magic bullets: they might yield dramatic speedups for certain special problems (like factoring), but they won’t tame the curse of exponentiality, cut through to the optimal solution, every time we encounter a Library-of-Babel-like profusion of possibilities.  For (say) the Traveling Salesman Problem with a thousand cities, even a quantum computer—which is the most powerful kind of computer rooted in known laws of physics—might, for all we know, take longer than the age of the universe to find the shortest route.

The truth is, though, the biggest numbers that show up in math are way bigger than anything we’ve discussed until now: bigger than 10^122, or even

$$ 2^{10^{122}}, $$

which is a rough estimate for the number of quantum-mechanical amplitudes needed to describe our observable universe.

For starters, there’s Skewes’ number, which the mathematician G. H. Hardy once called “the largest number which has ever served any definite purpose in mathematics.”  Let π(x) be the number of prime numbers up to x: for example, π(10)=4, since we have 2, 3, 5, and 7.  Then there’s a certain estimate for π(x) called li(x).  It’s known that li(x) overestimates π(x) for an enormous range of x’s (up to trillions and beyond)—but then at some point, it crosses over and starts underestimating π(x) (then overestimates again, then underestimates, and so on).  Skewes’ number is an upper bound on the location of the first such crossover point.  In 1955, Skewes proved that the first crossover must happen before

$$ x = 10^{10^{10^{964}}}. $$

Note that this bound has since been substantially improved, to 1.4×10^316.  But no matter: there are numbers vastly bigger even than Skewes’ original estimate, which have since shown up in Ramsey theory and other parts of logic and combinatorics to take Skewes’ number’s place.
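The overestimation itself is easy to see numerically. Here is a self-contained sketch that counts primes with a sieve and approximates li(x) by numerically integrating 1/ln t (the step count and the constant li(2) ≈ 1.045 are my choices for illustration):

```python
import math

def prime_count(x):
    """pi(x): count the primes up to x with a simple sieve."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            for j in range(i * i, x + 1, i):
                sieve[j] = False
    return sum(sieve)

def li(x, steps=100_000):
    """li(x) = li(2) + integral from 2 to x of dt/ln t (trapezoid rule)."""
    a, b = 2.0, float(x)
    h = (b - a) / steps
    total = 0.5 * (1 / math.log(a) + 1 / math.log(b))
    for k in range(1, steps):
        total += 1 / math.log(a + k * h)
    return 1.045 + total * h   # li(2) is approximately 1.045

# At x = 1000: pi(1000) = 168, while li(1000) is about 177.6 -- an
# overestimate, as it remains for every x anyone has computed directly.
```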

Alas, I won’t have time here to delve into specific (beautiful) examples of such numbers, such as Graham’s number.  So in lieu of that, let me just tell you about the sorts of processes, going far beyond exponentiation, that tend to yield such numbers.

The starting point is to remember a sequence of operations we all learn about in elementary school, and then ask why the sequence suddenly and inexplicably stops.

As long as we’re only talking about positive integers, “multiplication” just means “repeated addition.”  For example, 5×3 means 5 added to itself 3 times, or 5+5+5.

Likewise, “exponentiation” just means “repeated multiplication.”  For example, 5^3 means 5×5×5.

But what’s repeated exponentiation?  For that we introduce a new operation, which we call tetration, and write like so: $^{3}5$ means 5 raised to itself 3 times, or

$$ 5^{5^5} = 5^{3125} \approx 1.9 \times 10^{2184}. $$

But we can keep going. Let x pentated to the y, or xPy, mean x tetrated to itself y times.  Let x sextated to the y, or xSy, mean x pentated to itself y times, and so on.
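In Python, tetration is a three-line loop; the key point is that the tower is evaluated from the top down (right-associatively). Pentation and beyond follow the same pattern, but produce numbers far too large to ever evaluate:

```python
def tetrate(x, y):
    """x tetrated to y: a tower x^x^...^x of height y, evaluated top-down."""
    result = 1
    for _ in range(y):
        result = x ** result
    return result

# 5 tetrated to 3 is 5^(5^5) = 5^3125, a 2185-digit number Python computes
# instantly.  But pentation already explodes: 2 pentated to 3 means
# 2 tetrated to (2 tetrated to 2) = 2 tetrated to 4 = 65536, and anything
# much bigger than that is hopeless to write down.
```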

Then we can define the Ackermann function, invented by the mathematician Wilhelm Ackermann in 1928, which cuts across all these operations to get more rapid growth than we could with any one of them alone.  In terms of the operations above, we can give a slightly nonstandard, but perfectly serviceable, definition of the Ackermann function as follows:

A(1) is 1+1=2.

A(2) is 2×2=4.

A(3) is 3 to the 3rd power, or 3^3 = 27.

Not very impressive so far!  But wait…

A(4) is 4 tetrated to the 4, or

$$ ^{4}4 = 4^{4^{4^4}} = 4^{4^{256}} = \text{BIG} $$

A(5) is 5 pentated to the 5, which I won’t even try to simplify.  A(6) is 6 pentated to the 6.  And so on.
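Here is that slightly nonstandard definition as a Python sketch, built on a generic hyperoperation (level 1 is addition, level 2 multiplication, level 3 exponentiation, level 4 tetration, and so on; the function names are mine). A(1) through A(3) evaluate instantly, while A(4) is already hopeless to compute in full:

```python
def hyper(level, x, y):
    """The level-th hyperoperation: 1 = addition, 2 = multiplication,
    3 = exponentiation, 4 = tetration, 5 = pentation, ..."""
    if level == 1:
        return x + y
    if level == 2:
        return x * y
    if level == 3:
        return x ** y
    result = x
    for _ in range(y - 1):              # fold the next-lower operation y-1 times
        result = hyper(level - 1, x, result)
    return result

def A(n):
    """The talk's Ackermann function: n combined with itself at level n."""
    return hyper(n, n, n)

# A(1) = 2, A(2) = 4, A(3) = 27.  Don't call A(4): it is 4 tetrated to
# the 4, whose exponent tower alone overflows any conceivable memory.
```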

More than just a curiosity, the Ackermann function actually shows up sometimes in math and theoretical computer science.  For example, the inverse Ackermann function—a function α such that α(A(n))=n, which therefore grows as slowly as the Ackermann function grows quickly, and which is at most 4 for any n that would ever arise in the physical universe—sometimes appears in the running times of real-world algorithms.

In the meantime, though, the Ackermann function has a more immediate application.  Next time you find yourself in a biggest-number contest, like the one with which we opened this talk, you can just write A(1000), or even A(A(1000)) (after specifying that A means the Ackermann function above).  You’ll win—period—unless your opponent has also heard of something Ackermann-like or beyond.

OK, but Ackermann is very far from the end of the story.  If we want to go incomprehensibly beyond it, the starting point is the so-called “Berry Paradox”, which was first described by Bertrand Russell, though he said he learned it from a librarian named Berry.  The Berry Paradox asks us to imagine leaping past exponentials, the Ackermann function, and every other particular system for naming huge numbers, and just going straight for a single gambit that seems to beat everything else:

The biggest number that can be specified using a hundred English words or fewer

Why is this called a paradox?  Well, do any of you see the problem?

Right: if the above made sense, then we could just as well talk about

Twice the biggest number that can be specified using a hundred English words or fewer

But we just specified that number—one that, by definition, takes more than a hundred words to specify—using far fewer than a hundred words!  Whoa.  What gives?

Most logicians would say the resolution of the paradox is simply that the concept of “specifying a number with English words” isn’t precisely defined, so that phrases like the ones above don’t actually name definite numbers.  And how do we know that the concept isn’t precisely defined?  Because, if it was, then it would lead to paradoxes like the Berry Paradox!

This means that, if we want to escape the jaws of logical contradiction, then in this gambit, we ought to replace English by a clear, logical language: one that can be used to specify numbers in an unambiguous way.  Like … oh, I know!

The biggest number that can be specified using a computer program that’s at most 1000 bytes long

To make this work, there are just two issues we need to get out of the way.  First, what does it mean to “specify” a number using a computer program?  There are different things it could mean, but for concreteness, let’s say a computer program specifies a number N if, when you run it (with no input), the program runs for exactly N steps and then stops.  A program that runs forever doesn’t specify any number.

The second issue is, which programming language do we have in mind: BASIC? C? Python?  The answer is that it won’t much matter!  The Church-Turing Thesis, one of the foundational ideas of computer science, implies that every “reasonable” programming language can emulate every other one.  So the story here can be repeated with just about any programming language of your choice.  For concreteness, though, we’ll pick one of the first and simplest programming languages, namely “Turing machine”—the language invented by Alan Turing all the way back in 1936!

In the Turing machine language, we imagine a one-dimensional tape divided into squares, extending infinitely in both directions, and with all squares initially containing a “0.”  There’s also a tape head with n “internal states,” moving back and forth on the tape.  Each internal state contains an instruction, and the only allowed instructions are: write a “0” in the current square, write a “1” in the current square, move one square left on the tape, move one square right on the tape, jump to a different internal state, halt, and do any of the previous conditional on whether the current square contains a “0” or a “1.”

Using Turing machines, in 1962 the mathematician Tibor Radó defined the so-called Busy Beaver function, or BB(n), which allowed naming by far the largest numbers anyone had yet named.  BB(n) is defined as follows: consider all Turing machines with n internal states.  Some of those machines run forever, when started on an all-0 input tape.  Discard them.  Among the ones that eventually halt, there must be some machine that runs for a maximum number of steps before halting.  However many steps that is, that’s what we call BB(n), the nth Busy Beaver number.
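The definition can be made concrete with a brute-force search over tiny machines. This is just a sketch: the encoding (a transition table mapping (state, symbol) to (symbol to write, move direction, next state), with -1 meaning halt, and the halting transition itself counted as a step, as in Radó's convention) and the step cap are my own choices.

```python
from itertools import product

HALT = -1  # sentinel "next state": write, move, then halt

def run(delta, limit):
    """Run a machine on an all-0 two-way-infinite tape (a dict of written
    cells).  Return the number of steps if it halts within `limit`, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, limit + 1):
        write, move, nxt = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step
        state = nxt
    return None  # treated as "runs forever" (true only up to the cap)

def busy_beaver(n, limit=50):
    """Max steps over all n-state machines that halt within `limit` steps."""
    cells = [(s, b) for s in range(n) for b in (0, 1)]
    moves = [(w, m, t) for w in (0, 1) for m in (-1, 1)
             for t in list(range(n)) + [HALT]]
    best = 0
    for table in product(moves, repeat=len(cells)):
        steps = run(dict(zip(cells, table)), limit)
        if steps is not None:
            best = max(best, steps)
    return best
```

With these conventions, busy_beaver(1) returns 1 and busy_beaver(2) returns 6.  Beyond a few states the cap-and-discard shortcut stops being honest (you can no longer be sure the discarded machines really run forever), which is exactly why the larger determinations were research papers.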

The first few values of the Busy Beaver function have actually been calculated, so let’s see them.

BB(1) is 1.  For a 1-state Turing machine on an all-0 tape, the choices are limited: either you halt in the very first step, or else you run forever.

BB(2) is 6, as is not too hard to verify by trying things out with pen and paper.

BB(3) is 21: that determination was already a research paper.

BB(4) is 107 (another research paper).

Much like with the Ackermann function, not very impressive yet!  But wait:

BB(5) is not yet known, but it’s known to be at least 47,176,870.

BB(6) is at least 7.4×10^36,534.

BB(7) is at least

$$ 10^{10^{10^{10^{18,000,000}}}}. $$

Clearly we’re dealing with a monster here, but can we understand just how terrifying a monster?  Well, call a sequence f(1), f(2), … computable, if there’s some computer program that takes n as input, runs for a finite time, then halts with f(n) as its output.  To illustrate, f(n)=n², f(n)=2ⁿ, and even the Ackermann function that we saw before are all computable.
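For reference, here is one standard formulation of the Ackermann function (the two-argument Ackermann–Péter version; the version defined earlier in the talk may have been phrased differently):

```python
def ackermann(m, n):
    """Ackermann-Peter function: computable, but grows faster than any
    primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even tiny inputs explode: ackermann(3, 3) is 61, while ackermann(4, 2) already has 19,729 digits (and naive recursion like the above won't get you there).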

But I claim that the Busy Beaver function grows faster than any computable function.  Since this talk should have at least some math in it, let’s see a proof of that claim.

Maybe the nicest way to see it is this: suppose, to the contrary, that there were a computable function f that grew at least as fast as the Busy Beaver function.  Then by using that f, we could take the Berry Paradox from before, and turn it into an actual contradiction in mathematics!  So for example, suppose the program to compute f were a thousand bytes long.  Then we could write another program, not much longer than a thousand bytes, to run for (say) 2×f(1000000) steps: that program would just need to include a subroutine for f, plus a little extra code to feed that subroutine the input 1000000, and then to run for 2×f(1000000) steps.  But by assumption, f(1000000) is at least the maximum number of steps that any program up to a million bytes long can run for—even though we just wrote a program, less than a million bytes long, that ran for more steps!  This gives us our contradiction.  The only possible conclusion is that the function f, and the program to compute it, couldn’t have existed in the first place.

(As an alternative, rather than arguing by contradiction, one could simply start with any computable function f, and then build programs that compute f(n) for various “hardwired” values of n, in order to show that BB(n) must grow at least as rapidly as f(n).  Or, for yet a third proof, one can argue that, if any upper bound on the BB function were computable, then one could use that to solve the halting problem, which Turing famously showed to be uncomputable in 1936.)
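The third proof can be sketched directly: any computable upper bound on BB would yield a halting decider. The machine encoding below (a dict of transitions, with -1 meaning halt) is just an illustrative choice.

```python
def simulate(delta, limit):
    """Run a machine (dict[(state, sym)] -> (write, move, next); next == -1
    means halt) on an all-0 tape for at most `limit` steps.
    Return True iff it halts within the limit."""
    tape, pos, state = {}, 0, 0
    for _ in range(limit):
        write, move, nxt = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return True
        state = nxt
    return False

def halts(delta, n_states, bb_bound):
    """Decide halting, *provided* bb_bound(n) >= BB(n) for all n -- which,
    by Turing's theorem, no computable bb_bound can satisfy.  That's the
    point: if such a bound existed, these two lines would solve the
    halting problem."""
    return simulate(delta, bb_bound(n_states))
```

For 1-state machines, BB(1)=1, so a bound like `lambda n: 10` is honest there: `halts` then correctly reports True for a machine whose first transition halts, and False for one that marches off down the tape forever.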

In some sense, it’s not so surprising that the BB function should grow uncomputably quickly—because if it were computable, then huge swathes of mathematical truth would be laid bare to us.  For example, suppose we wanted to know the truth or falsehood of the Goldbach Conjecture, which says that every even number 4 or greater can be written as a sum of two prime numbers.  Then we’d just need to write a program that checked each even number one by one, and halted if and only if it found one that wasn’t a sum of two primes.  Suppose that program corresponded to a Turing machine with N states.  Then by definition, if it halted at all, it would have to halt after at most BB(N) steps.  But that means that, if we knew BB(N)—or even any upper bound on BB(N)—then we could find out whether our program halts, by simply running it for the requisite number of steps and seeing.  In that way we’d learn the truth or falsehood of Goldbach’s Conjecture—and similarly for the Riemann Hypothesis, and every other famous unproved mathematical conjecture (there are a lot of them) that can be phrased in terms of a computer program never halting.
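A sketch of such a program for Goldbach (the `limit` parameter is my addition so the sketch can be run safely; with `limit=None` it is exactly the halts-iff-Goldbach-fails program described above):

```python
def is_prime(k):
    """Trial-division primality test -- slow but clearly correct."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample(limit=None):
    """Scan even numbers from 4 upward; halt (returning n) iff some even n
    is not a sum of two primes.  With limit=None, halting at all would
    refute Goldbach -- so, presumably, it runs forever."""
    n = 4
    while limit is None or n <= limit:
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n
        n += 2
    return None
```

Checking the first few hundred even numbers finds no counterexample, as expected; the point of the passage is that an upper bound on BB(N) would tell you in advance how far "far enough" is.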

OK, you wanna know something else wild about the Busy Beaver function?  In 2015, my former student Adam Yedidia and I wrote a paper where we proved that BB(8000)—i.e., the 8000th Busy Beaver number—can’t be determined using the usual axioms for mathematics, which are called Zermelo-Fraenkel (ZF) set theory.  Nor can BB(8001) or any larger Busy Beaver number.

To be sure, BB(8000) has some definite value: there are finitely many 8000-state Turing machines, and each one either halts or runs forever, and among the ones that halt, there’s some maximum number of steps that any of them runs for.  What we showed is that math, if it limits itself to the currently-accepted axioms, can never prove the value of BB(8000), even in principle.

The way we did that was by explicitly constructing an 8000-state Turing machine, which (in effect) enumerates all the consequences of the ZF axioms one after the next, and halts if and only if it ever finds a contradiction—that is, a proof of 0=1.  Presumably set theory is actually consistent, and therefore our program runs forever.  But if you proved the program ran forever, you’d also be proving the consistency of set theory.  And has anyone heard of any obstacle to doing that?  Of course, Gödel’s Incompleteness Theorem!  Because of Gödel, if set theory is consistent (well, technically, also arithmetically sound), then it can’t prove our program either halts or runs forever.  But that means set theory can’t determine BB(8000) either—because if it could do that, then it could also determine the behavior of our program.

To be clear, it was long understood that there’s some computer program that halts if and only if set theory is inconsistent—and therefore, that the axioms of set theory can determine at most k values of the Busy Beaver function, for some positive integer k.  “All” Adam and I did was to prove the first explicit upper bound, k≤8000, which required a lot of optimizations and software engineering to get the number of states down to something reasonable (our initial estimate was more like k≤1,000,000).  More recently, Stefan O’Rear has improved our bound—most recently, he says, to k≤1000, meaning that, at least by the lights of ZF set theory, fewer than a thousand values of the BB function can ever be known.

Meanwhile, let me remind you that, at present, only four values of the function are known!  Could the value of BB(100) already be independent of set theory?  What about BB(10)?  BB(5)?  Just how early in the sequence do you leap off into Platonic hyperspace?  I don’t know the answer to that question but would love to.

Ah, you ask, but is there any number sequence that grows so fast, it blows even the Busy Beavers out of the water?  There is!

Imagine a magic box into which you could feed in any positive integer n, and it would instantly spit out BB(n), the nth Busy Beaver number.  Computer scientists call such a box an “oracle.”  Even though the BB function is uncomputable, it still makes mathematical sense to imagine a Turing machine that’s enhanced by the magical ability to access a BB oracle any time it wants: call this a “super Turing machine.”  Then let SBB(n), or the nth super Busy Beaver number, be the maximum number of steps that any n-state super Turing machine makes before halting, if given no input.

By simply repeating the reasoning for the ordinary BB function, one can show that, not only does SBB(n) grow faster than any computable function, it grows faster than any function computable by super Turing machines (for example, BB(n), BB(BB(n)), etc).

Let a super duper Turing machine be a Turing machine with access to an oracle for the super Busy Beaver numbers.  Then you can use super duper Turing machines to define a super duper Busy Beaver function, which you can use in turn to define super duper pooper Turing machines, and so on!

Let “level-1 BB” be the ordinary BB function, let “level-2 BB” be the super BB function, let “level-3 BB” be the super duper BB function, and so on.  Then clearly we can define “level-k BB” for any positive integer k.

But we need not stop even there!  We can then go to level-ω BB.  What’s ω?  Mathematicians would say it’s the “first infinite ordinal”—the ordinals being a system where you can pass from any set of numbers you can possibly name (even an infinite set), to the next number larger than all of them.  More concretely, the level-ω Busy Beaver function is simply the Busy Beaver function for Turing machines that are able, whenever they want, to call an oracle to compute the level-k Busy Beaver function, for any positive integer k of their choice.

But why stop there?  We can then go to level-(ω+1) BB, which is just the Busy Beaver function for Turing machines that are able to call the level-ω Busy Beaver function as an oracle.  And thence to level-(ω+2) BB, level-(ω+3) BB, etc., defined analogously.  But then we can transcend that entire sequence and go to level-2ω BB, which involves Turing machines that can call level-(ω+k) BB as an oracle for any positive integer k.  In the same way, we can pass to level-3ω BB, level-4ω BB, etc., until we transcend that entire sequence and pass to level-ω² BB, which can call any of the previous ones as oracles.  Then we have level-ω³ BB, level-ω⁴ BB, etc., until we transcend that whole sequence with level-ω^ω BB.  But we’re still not done!  For why not pass to level

$$ \omega^{\omega^{\omega}} $$,

then

$$ \omega^{\omega^{\omega^{\omega}}} $$,

etc., until we reach level

$$ \left. \omega^{\omega^{\omega^{.^{.^{.}}}}}\right\} _{\omega\text{ times}} $$?

(This last ordinal is also called ε₀.)  And mathematicians know how to keep going even to way, way bigger ordinals than ε₀, which give rise to ever more rapidly-growing Busy Beaver sequences.  Ordinals achieve something that on its face seems paradoxical, which is to systematize the concept of transcendence.

So then just how far can you push this?  Alas, ultimately the answer depends on which axioms you assume for mathematics.  The issue is this: once you get to sufficiently enormous ordinals, you need some systematic way to specify them, say by using computer programs.  But then the question becomes which ordinals you can “prove to exist,” by giving a computer program together with a proof that the program does what it’s supposed to do.  The more powerful the axiom system, the bigger the ordinals you can prove to exist in this way—but every axiom system will run out of gas at some point, only to be transcended, in Gödelian fashion, by a yet more powerful system that can name yet larger ordinals.

So for example, if we use Peano arithmetic—invented by the Italian mathematician Giuseppe Peano—then Gentzen proved in the 1930s that we can name any ordinals below ε₀, but not ε₀ itself or anything beyond it.  If we use ZF set theory, then we can name vastly bigger ordinals, but once again we’ll eventually run out of steam.

(Technical remark: some people have claimed that we can transcend this entire process by passing from first-order to second-order logic.  But I fundamentally disagree, because with second-order logic, which number you’ve named could depend on the model of set theory, and therefore be impossible to pin down.  With the ordinal Busy Beaver numbers, by contrast, the number you’ve named might be breathtakingly hopeless ever to compute—but provided the notations have been fixed, and the ordinals you refer to actually exist, at least we know there is a unique positive integer that you’re talking about.)

Anyway, the upshot of all of this is that, if you try to hold a name-the-biggest-number contest between two actual professionals who are trying to win, it will (alas) degenerate into an argument about the axioms of set theory.  For the stronger the set theory you’re allowed to assume consistent, the bigger the ordinals you can name, therefore the faster-growing the BB functions you can define, therefore the bigger the actual numbers.

So, yes, in the end the biggest-number contest just becomes another Gödelian morass, but one can get surprisingly far before that happens.

In the meantime, our universe seems to limit us to at most 10^122 choices that could ever be made, or experiences that could ever be had, by any one observer.  Or fewer, if you believe that you won’t live until the heat death of the universe in some post-Singularity computer cloud, but only for about 10^2 years.  Meanwhile, the survival of the human race might hinge on people’s ability to understand much smaller numbers than 10^122: for example, a billion, a trillion, and other numbers that characterize the exponential growth of our civilization and the limits that we’re now running up against.

On a happier note, though, if our goal is to make math engaging to young people, or to build bridges between the quantitative and literary worlds, the way this festival is doing, it seems to me that it wouldn’t hurt to let people know about the vastness that’s out there.  Thanks for your attention.


The $110 Billion Arms Deal to Saudi Arabia is Fake News


Editor's note: This post originally appeared on Markaz.


Last month, President Trump visited Saudi Arabia and his administration announced that he had concluded a $110 billion arms deal with the kingdom. The only problem is that there is no deal. It’s fake news.

I’ve spoken to contacts in the defense business and on the Hill, and all of them say the same thing: There is no $110 billion deal. Instead, there are a bunch of letters of interest or intent, but not contracts. Many are offers that the defense industry thinks the Saudis will be interested in someday. So far nothing has been notified to the Senate for review. The Defense Security Cooperation Agency, the arms sales wing of the Pentagon, calls them “intended sales.” None of the deals identified so far are new, all began in the Obama administration.

An example is a proposal for sale of four frigates (called multi-mission surface combatant vessels) to the Royal Saudi navy. This proposal was first reported by the State Department in 2015. No contract has followed. The type of frigate is a derivative of a vessel that the U.S. Navy uses but the derivative doesn’t actually exist yet. Another piece is the Terminal High Altitude Air Defense system (THAAD) which was recently deployed in South Korea. The Saudis have expressed interest in the system for several years but no contracts have been finalized. Obama approved the sale in principle at a summit at Camp David in 2015. Also on the wish list are 150 Black Hawk helicopters. Again, this is old news repackaged. What the Saudis and the administration did is put together a notional package of the Saudi wish list of possible deals and portray that as a deal. Even then the numbers don’t add up. It’s fake news.

Moreover, it’s unlikely that the Saudis could pay for a $110 billion deal any longer, due to low oil prices and the two-plus-year-old war in Yemen. President Obama sold the kingdom $112 billion in weapons over eight years, most of which was a single, huge deal in 2012 negotiated by then-Secretary of Defense Bob Gates. To get that deal through Congressional approval, Gates also negotiated a deal with Israel to compensate the Israelis and preserve their qualitative edge over their Arab neighbors. With the fall in oil prices, the Saudis have struggled to meet their payments since.

You will know the Trump deal is real when Israel begins to ask for a package to keep the Israeli Defense Forces’ qualitative edge preserved. What is coming soon is a billion-dollar deal for more munitions for the war in Yemen. The Royal Saudi Air Force needs more munitions to continue the air bombardment of the Arab world’s poorest country.

Finally, just as the arms deal is not what was advertised, neither is the much-hyped united Muslim campaign against terrorism. Instead, the Gulf states have turned on one of their own. Saudi Arabia has orchestrated a campaign to isolate Qatar. This weekend Saudi Arabia, the UAE, Bahrain, and Egypt broke relations with Qatar. Saudi allies like the Maldives and Yemen jumped on the bandwagon. Saudi Arabia has closed its land border with Qatar.

This is not the first such spat but it may be the most dangerous. The Saudis and their allies are eager to punish Qatar for supporting the Muslim Brotherhood, for hosting Al-Jazeera, and keeping ties with Iran. Rather than a united front to contain Iran, the Riyadh summit’s outcome is exacerbating sectarian and political tensions in the region.


Book Review: The Hungry Brain


[Content note: food, dieting, obesity]


The Hungry Brain gives off a bit of a Malcolm Gladwell vibe, with its cutesy name and pop-neuroscience style. But don’t be fooled. Stephan Guyenet is no Gladwell-style dilettante. He’s a neuroscientist studying nutrition, with a side job as a nutrition consultant, who spends his spare time blogging about nutrition, tweeting about nutrition, and speaking at nutrition-related conferences. He is very serious about what he does and his book is exactly as good as I would have hoped. Not only does it provide the best introduction to nutrition I’ve ever seen, but it incidentally explains other neuroscience topics better than the books directly about them do.

I first learned about Guyenet’s work from his various debates with Gary Taubes and his supporters, where he usually represents the “establishment” side. He is very careful to emphasize that the establishment doesn’t look anything like Taubes’ caricature of it. The establishment doesn’t believe that obesity is just about weak-willed people voluntarily choosing to eat too much, or that obese people would get thin if they just tried diet and exercise, or that all calories are the same. He writes

The [calories in, calories out] model is the idea that our body weight is determined by voluntary decisions about how much we eat and move, and in order to control our body weight, all we need is a little advice about how many calories to eat and burn, and a little willpower. The primary defining feature of this model is that it assumes that food intake and body fatness are not regulated. This model seems to exist mostly to make lean people feel smug, since it attributes their leanness entirely to wise voluntary decisions and a strong character. I think at this point, few people in the research world believe the CICO model.

[Debate opponent Dr. David] Ludwig and I both agree that it provides a poor fit for the evidence. As an alternative, Ludwig proposes the insulin model, which states that the primary cause of obesity is excessive insulin action on fat cells, which in turn is caused principally by rapidly-digesting carbohydrate. According to this model, too much insulin reduces blood levels of glucose and fatty acids (the two primary circulating metabolic fuels), simultaneously leading to hunger, fatigue, and fat gain. Overeating is caused by a kind of “internal starvation”. There are other versions of the insulin model, but this is the one advocated by Ludwig (and Taubes), so it will be my focus.

But there’s a third model, not mentioned by Ludwig or Taubes, which is the one that predominates in my field. It acknowledges the fact that body weight is regulated, but the regulation happens in the brain, in response to signals from the body that indicate its energy status. Chief among these signals is the hormone leptin, but many others play a role (insulin, ghrelin, glucagon, CCK, GLP-1, glucose, amino acids, etc.)

The Hungry Brain is part of Guyenet’s attempt to explain this third model, and it basically succeeds. But like many “third way” style proposals, it leaves a lot of ambiguity. With CICO, at least you know where you stand – confident that everything is based on willpower and that you can ignore biology completely. And again, with Taubes, you know where you stand – confident that willpower is useless and that low-carb diets will solve everything. The Hungry Brain is a little more complicated, a little harder to get a read on, and at times pretty wishy-washy.

But listening to people’s confidently-asserted simple and elegant ideas was how we got into this mess, so whatever, let’s keep reading.


The Hungry Brain begins with the typical ritual invocation of the obesity epidemic. Did you know there are entire premodern cultures where literally nobody is obese? That in the 1800s, only 5% of Americans were? That the prevalence of obesity has doubled since 1980?

Researchers have been keeping records of how much people eat for a long time, and increased food intake since 1980 perfectly explains increased obesity since 1980 – there is no need to bring in decreased exercise or any other factors. Exercise has decreased since the times when we were all tilling fields ten hours a day, but for most of history, as our exercise decreased, our food intake decreased as well. But for some reason, starting around 1980, the two factors uncoupled, and food intake started to rise despite exercise continuing to decrease.

Guyenet discusses many different reasons this might have happened, including stress-related overeating, poor sleep, and quick prepackaged food. But the ideas he keeps coming back to again and again are food reward and satiety.

In the 1970s, scientists wanted to develop new rat models of obesity. This was harder than it sounded; rats ate only as much as they needed and never got fat. Various groups tried to design various new forms of rat chow with extra fat, extra sugar, et cetera, with only moderate success – sometimes they could get the rats to eat a little too much and gradually become sort of obese, but it was a hard process. Then, almost by accident, someone tried feeding the rats human snack food, and they ballooned up to be as fat as, well, humans. The book:

Palatable human food is the most effective way to cause a normal rat to spontaneously overeat and become obese, and its fattening effect cannot be attributed solely to its fat or sugar content.

So what does cause this fattening effect? I think the book’s answer is “no single factor, but that doesn’t matter, because capitalism is an optimization process that designs foods to be as rewarding as possible, so however many different factors there are, every single one of them will be present in your bag of Doritos”. But to be more scientific about it, the specific things involved are some combination of sweet/salty/umami tastes, certain ratios of fat and sugar, and reinforced preferences for certain flavors.

Modern food isn’t just unusually rewarding, it’s also unusually bad at making us full. The brain has some pretty sophisticated mechanisms to determine when we’ve eaten enough; these usually involve estimating food’s calorie load from its mass and fiber level. But modern food is calorically dense – it contains many more calories than predicted per unit mass – and fiber-poor. This fools the brain into thinking that we’re eating less than we really are, and shuts down the system that would normally make us feel full once we’ve had enough. Simultaneously, the extremely high level of food reward tricks the brain into thinking that this food is especially nutritionally valuable and that it should relax its normal constraints.

Adding to all of this is the so-called “buffet effect”, where people will eat more calories from a variety of foods presented together than they would from any single food alone. My mother likes to talk about her “extra dessert stomach”, ie the thing where you can gorge yourself on a burger and fries until you’re totally full and couldn’t eat another bite – but then mysteriously find room for an ice cream sundae afterwards. This is apparently a real thing that’s been confirmed in scientific experiments, and a major difference between us and our ancestors. The !Kung Bushmen, everyone’s go-to example of an all-natural hunter-gatherer tribe, apparently get 50% of their calories from a single food, the mongongo nut, and another 40% from meat. Meanwhile, we design our meals to include as many unlike foods as possible – for example, a burger with fries, soda, and a milkshake for dessert. This once again causes the brain to relax its usual strict constraints on appetite and let us eat more than we should.

The book sums all of these things up into the idea of “food reward” making some foods “hyperpalatable” and “seducing” the reward mechanism in order to produce a sort of “food addiction” that leads to “cravings”, the “obesity epidemic”, and a profusion of “scare quotes”.

I’m being a little bit harsh here, but only to highlight a fundamental question. Guyenet goes into brilliant detail about things like the exact way the ventral tegmental area of the brain responds to food-related signals, but in the end, it all sounds suspiciously like “food tastes good so we eat a lot of it”. It’s hard to see where exactly this differs from the paradigm that he dismisses as “attributing leanness to wise voluntary decisions and a strong character…to make lean people feel smug”. Yes, food tastes good so we eat a lot of it. And Reddit is fun to read, but if someone browses Reddit ten hours a day and doesn’t do any work then we start speculating about their character, and maybe even feeling smug. This part of the book, taken alone, doesn’t really explain why we shouldn’t be doing that about weight too.


The average person needs about 800,000 calories per year. And it takes about 3,500 extra calories to gain a pound of weight. So if somebody stays about the same weight for a year, it means they fulfilled their 800,000 calorie requirement to within a tolerance of 3,500 calories, ie they were able to match their food intake to their caloric needs with 99.5% accuracy.

By this measure, even people who gain five or ten pounds a year are doing remarkably well, falling short of perfection by only a few percent.  It’s not quite true that someone who gains five pounds is 1 − ((5*3,500)/800,000) ≈ 98% accurate, because each pound you gain increases caloric requirements in a negative feedback loop, but it’s somewhere along those lines.
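Spelled out, the arithmetic above looks like this (the figures are the book's round numbers; the exact percentages are mine, which the text rounds):

```python
yearly_need = 800_000     # kcal a typical person burns in a year
kcal_per_pound = 3_500    # surplus kcal per pound of fat gained

# Constant weight for a year: intake matched expenditure to within one
# pound's worth of calories.
accuracy_stable = 1 - kcal_per_pound / yearly_need        # ~0.9956, "99.5%"

# Gaining five pounds in a year: still within a few percent of perfect.
accuracy_five_lb = 1 - 5 * kcal_per_pound / yearly_need   # ~0.978, "98%"
```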

Take a second to think about that. Can you, armed with your FitBit and nutritional labeling information, accurately calculate how many calories you burn in a given day, and decide what amount of food you need to eat to compensate for it, within 10%? I think even the most obsessive personal trainer would consider that a tall order. But even the worst overeaters are subconsciously managing that all the time. However many double bacon cheeseburgers they appear to be eating in a single sitting, over the long term their body is going to do some kind of magic to get them to within a few percent of the calorie intake they need.

It’s not surprising that people overeat, it’s surprising that people don’t overeat much more. Consider someone who just has bad impulse control and so eats whatever they see – wouldn’t we expect them to deviate from ideal calorie input by more than a few percent, given that this person probably has no idea what their ideal input even is and maybe has never heard of calories?

And as impressive as modern Westerners’ caloric balance is, everyone else’s is even better. Guyenet discusses the Melanesian island of Kitava, where there is literally only one fat person on the entire island – a businessman who spends most of his time in modern urbanized New Guinea, eating Western food. The Kitavans have enough food, and they live a relaxed tropical lifestyle that doesn’t demand excessive exercise. But their bodies aren’t making even the 10% error that ours do. They’re essentially perfect. So are the !Kung with their mongongo nuts, Inuit with their blubber, et cetera.

And so are Westerners who limit themselves to bland food. In 1965, some scientists locked people in a room where they could only eat nutrient sludge dispensed from a machine. Even though the volunteers had no idea how many calories the nutrient sludge contained, they ate exactly enough to maintain their normal weight, proving the existence of a “sixth sense” for food caloric content. Next, they locked morbidly obese people in the same room. They ended up eating only tiny amounts of the nutrient sludge, one or two hundred calories a day, without feeling any hunger. This proved that their bodies “wanted” to lose the excess weight and preferred to simply live off stored fat once removed from the overly-rewarding food environment. After six months on the sludge, a man who weighed 400 lbs at the start of the experiment was down to 200, without consciously trying to reduce his weight.

In a similar experiment going the opposite direction, Ethan Sims got normal-weight prison inmates to eat extraordinary amounts of food – yet most of them still had trouble gaining weight. He had to dial their caloric intake up to 10,000/day – four times more than a normal person – before he was able to successfully make them obese. And after the experiment, he noted that “most of them hardly had any appetite for weeks afterward, and the majority slimmed back down to their normal weight”.

What is going on here? Like so many questions, this one can best be solved by grotesque Frankenstein-style suturing together of the bodies of living creatures.

In the 1940s, scientists discovered that if they damaged the ventromedial hypothalamic nucleus (VMN) of rats, the rats would basically never stop eating, becoming grotesquely obese. They theorized that the VMN was a “satiety center” that gave rats the ability to feel full; without it, they would feel hungry forever. Later on, a strain of mutant rats was discovered that seemed to naturally have the same sort of issue, despite seemingly intact hypothalami. Scientists wondered if there might be a hormonal problem, and so they artificially conjoined these rats to healthy normal rats, sewing together their circulatory systems into a single network. The result: when a VMN-lesioned rat was joined to a normal rat, the VMN-lesioned rat stayed the same, but the normal rat stopped eating and starved to death. When a mutant rat was joined to a normal rat, the normal rat stayed the same and the mutant rat recovered and became normal weight.

The theory they came up with to explain the results was this: fat must produce some kind of satiety hormone, saying “Look, you already have a lot of fat, you can feel full and stop eating now”. The VMN of the hypothalamus must detect this message and tell the brain to feel full and stop eating. So the VMN-lesioned rats, whose detector was mostly damaged, responded by never feeling full, eating more and more food, and secreting more and more (useless) satiety hormone. When they were joined to normal rats, their glut of satiety hormones flooded the normal rats – and their normal brain, suddenly bombarded with “YOU ALREADY HAVE WAY TOO MUCH FAT” messages, stopped eating entirely.

The mutant rats, on the other hand, had lost the ability to produce the satiety hormone. They, too, felt hungry all the time and ate everything. But when they were joined to a normal rat, the normal levels of satiety hormone flowed from the normal rat into the mutant rat, reached the fully-functional detector in their brains, and made them feel full, curing their obesity.

Skip over a lot of scientific infighting and unfortunate priority disputes and patent battles, and it turns out the satiety hormone is real, exists in humans as well, and is called leptin. A few scientists managed to track down some cases of genetic leptin deficiency in humans, our equivalent of the mutant rats, and, well…

Usually they are of normal birth weight and then they’re very, very hungry from the first weeks and months of life. By age one, they have obesity. By age two, they weigh 55-65 pounds, and their obesity only accelerates from there. While a normal child may be about 25% fat, and a typical child with obesity may be 40% fat, leptin-deficient children are up to 60% fat. Farooqi explains that the primary reason leptin-deficient children develop obesity is that they have “an incredible drive to eat”…leptin-deficient children are nearly always hungry, and they almost always want to eat, even shortly after meals. Their appetite is so exaggerated that it’s almost impossible to put them on a diet: if their food is restricted, they find some way to eat, including retrieving stale morsels from the trash can and gnawing on fish sticks directly from the freezer. This is the desperation of starvation […]

Unlike normal teenagers, those with leptin deficiency don’t have much interest in films, dating, or other teenage pursuits. They want to talk about food, about recipes. “Everything they do, think about, talk about, has to do with food” says [Dr.] Farooqi. This shows that the [leptin system] does much more than simply regulate appetite – it’s so deeply rooted in the brain that it has the ability to hijack a broad swath of brain functions, including emotions and cognition.

It’s the leptin-VMN feedback system (dubbed the “lipostat”) that helps people match their caloric intake to their caloric requirements so impressively. The lipostat is what keeps hunter-gatherers eating exactly the right number of mongongo nuts, and what keeps modern Western overeaters much closer to the right weight than they could otherwise expect.

The lipostat-brain interface doesn’t just control the raw feeling of hunger, it seems to have a wide range of food-related effects, including some on higher cognition. Ancel Keys (of getting-blamed-for-everything fame) ran the Minnesota Starvation Experiment on some unlucky conscientious objectors to World War II. He starved them until they lost 25% of their body weight, and found that:

Over the course of their weight loss, Keys’s subjects developed a remarkable obsession with food. In addition to their inescapable, gnawing hunger, their conversations, thoughts, fantasies, and dreams revolved around food and eating – part of a phenomenon Keys called “semistarvation neurosis”. They became fascinated by recipes and cookbooks, and some even began collecting cooking utensils. Like leptin-deficient adolescents, their lives revolved around food. Also like leptin-deficient adolescents, they had very low leptin levels due to their semi-starved state.

Unsurprisingly, as soon as the experiment ended, they gorged themselves until they were right back at their pre-experiment weights (but no higher), at which point they lost their weird food obsession.

Just as a well-functioning lipostat is very good at keeping people normal weight, a malfunctioning lipostat is very good at keeping people obese. Fat people seem to have “leptin resistance”, sort of like the VMN-lesioned rats, so that their bodies get confused about how much fat they have. Suppose a healthy person weighs 150 lbs, his body is on board with that, and his lipostat is set to defend a 150 lb set point. Then for some reason he becomes leptin-resistant, so that the brain is only half as good at detecting leptin as it should be. Now he will have to be 300 lbs before his brain “believes” he is the right weight and stops encouraging him to eat more. If he goes down to a “mere” 280 lbs, then he will go into the same semistarvation neurosis as Ancel Keys’ experimental subjects and become desperately obsessed with food until he gets back up to 300 again. Or his body will just slow down metabolism until his diet brings him back up. Or the lipostat will use any of the other tricks it has to restore weight when it wants to.
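The arithmetic here can be made concrete with a toy model (my own sketch, not anything from the book): if the brain defends a target leptin *signal* rather than a target weight, then halving detection efficiency doubles the weight needed to produce that signal. The function name and units below are invented for illustration.

```python
# Toy lipostat model (illustrative only). Assume the perceived signal is
#   perceived = weight * leptin_per_lb * detection_efficiency
# and the brain pushes weight until perceived equals its target.

def defended_weight(target_signal, leptin_per_lb, detection_efficiency):
    """Weight at which the perceived leptin signal matches the target."""
    return target_signal / (leptin_per_lb * detection_efficiency)

# Calibrate so a healthy brain (efficiency 1.0) defends 150 lbs.
target = 150.0  # arbitrary signal units

print(defended_weight(target, 1.0, 1.0))  # 150.0 -- healthy set point
print(defended_weight(target, 1.0, 0.5))  # 300.0 -- 50% leptin resistance
```

The same division also shows why the 280 lb dieter is treated as starving: at half efficiency, 280 lbs reads to the brain like 140 lbs would to a healthy one.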

(and I would be shocked if the opposite problem weren’t at least part of anorexia)

This explains the well-known phenomenon where contestants on The Biggest Loser who lose 200 or 300 pounds for the television camera pretty much always gain it back after the show ends. Even though they’re highly motivated and starting from a good place, their lipostat has seized on their previous weight as the set point it wants to defend, and resisting the lipostat is somewhere between hard and impossible. As far as I know, nobody has taken Amptoons up on their challenge to find a single peer-reviewed study showing any diet that can consistently bring fat people to normal weight and keep them there.

And alas, it doesn’t seem to work to just inject leptin directly. As per Guyenet:

People with garden variety obesity already have high levels of leptin…while leptin therapy does cause some amount of fat loss, it requires enormous doses to be effective – up to forty times the normal circulating amount. Also troubling is the extremely variable response, with some people losing over thirty pounds and others losing little or no weight. This is a far cry from the powerful fat-busting effect of leptin in rodents. [Leptin as] the new miracle weight-loss drug never made it to market.

This disappointment forced the academic and pharmaceutical communities to confront a distressing possibility: the leptin system defends vigorously against weight loss, but not so vigorously against weight gain. “I have always thought, and continue to believe,” explained [leptin expert Rudy] Leibel, “that the leptin hormone is really a mechanism for detecting deficiency, not excess.” It’s not designed to constrain body fatness, perhaps because being too fat is rarely a problem in the wild. Many researchers now believe that while low leptin levels in humans engage a powerful starvation response that promotes fat gain, high leptin levels don’t engage an equally powerful response that promotes fat loss.

Yet something seems to oppose rapid fat gain, as Ethan Sims’ overfeeding studies (and others) have shown. Although leptin clearly defends the lower limit of adiposity, the upper limit may be defended by an additional, unidentified factor – in some people more than others.

This is the other half of the uncomfortable dichotomy that makes me characterize The Hungry Brain as “wishy-washy”. The lipostat is a powerful and essentially involuntary mechanism for getting weight exactly where the brain wants, whether individual dieters are cooperative or not. Here it looks like obesity is nobody’s fault, unrelated to voluntary decisions, and that the standard paradigm really is just an attempt by lean people to feel smug. Practical diet advice starts to look like “inject yourself with quantities of leptin so massive that they overcome your body’s resistance”. How do we connect this with the other half of the book, the half with food reward and satiety and all that?


With more rat studies!

Dr. Barry Levin fed rats either a healthy-rat-food diet or a hyperpalatable-human-food diet, then starved and overfed them in various ways. He found that the rats defended their obesity set points in the expected manner, but that the same rats defend different set points depending on their diets. Rats on healthy-rat-food defended a low, healthy-for-rats set point; rats on hyperpalatable-human-food defended a higher set point that kept them obese.

That is, suppose you give a rat as much Standardized Food Product as it can eat. It eats until it weighs 8 ounces, and stays that weight for a while. Then you starve it until it only weighs 6 ounces, and it’s pretty upset. Then you let it eat as much as it wants again, and it overeats until it gets back to 8 ounces, then eats normally and maintains that weight.

But suppose you give a rat as many Oreos as it can eat. It eats until it weighs 16 ounces, and stays that weight for a while. Then you starve it until it only weighs 6 ounces. Then you let it eat as much as it wants again, and this time it overeats until it gets back to 16 ounces, and eats normally to maintain that weight.

Something similar seems to happen with humans. A guy named Michel Cabanac ran an experiment in which he put overweight people on two diets. In the first diet, they ate Standardized Food Product, and naturally lost weight since it wasn’t very good and they didn’t eat very much of it. In the second diet, he urged people to eat less until they matched the first group’s weight loss, but to keep eating the same foods as normal – just less of them. The second group reported being hungry and having a lot of trouble dieting; the first group reported not being hungry and not having any trouble at all.

Guyenet concludes:

Calorie-dense, highly rewarding food may favor overeating and weight gain not just because we passively overeat it but also because it turns up the set point of the lipostat. This may be one reason why regularly eating junk food seems to be a fast track to obesity in both animals and humans…focusing the diet on less rewarding foods may make it easier to lose weight and maintain weight loss because the lipostat doesn’t fight it as vigorously. This may be part of the explanation for why all weight-loss diets seem to work to some extent – even those that are based on diametrically opposed principles, such as low-fat, low-carbohydrate, paleo, and vegan diets. Because each diet excludes major reward factors, they may all lower the adiposity set point somewhat.

(this reminds me of the Shangri-La Diet, where people would drink two tablespoons of olive oil in the morning, then find it was easy to diet without getting hungry during the day. People wondered whether maybe the tastelessness of the olive oil had something to do with it. Could it be that the olive oil is temporarily bringing the lipostat down to its “bland food” level?)

Why should some food make the lipostat work better than other food? Guyenet now gets to some of his own research, which is on a type of brain cell called a POMC neuron. These neurons produce various chemicals, including a sort of anti-leptin called Neuropeptide Y, and they seem to be a very fundamental part of the lipostat and hunger system. In fact, if you use superprecise chemical techniques to kill NPY neurons but nothing else, you can cure obesity in rats.

The area of the hypothalamus with POMC neurons seems to be damaged in overweight rats and overweight humans. Microglia and astrocytes, the brain’s damage-management and repair cells, proliferated in appetite-related centers, but nowhere else. Maybe this literal damage corresponds to the metaphorically “damaged” lipostat that’s failing to maintain weight normally, or the “damaged” leptin detector that seems to be misinterpreting the body’s obesity?

In any case, eating normal rat food for long enough appears to heal this damage:

Our results suggest that obese rodents suffer from a mild form of brain injury in an area of the brain that’s critical for regulating food intake and adiposity. Not only that, but the injury response and inflammation that developed when animals were placed on a fattening diet preceded the development of obesity, suggesting that this brain injury could have played a role in the fattening process.

Guyenet isn’t exactly sure what aspect of modern diets cause the injury:

Many researchers have tried to narrow down the mechanisms by which this food causes changes in the hypothalamus and obesity, and they have come up with a number of hypotheses with varying amounts of evidence to support them. Some researchers believe the low fiber content of the diet precipitates inflammation and obesity by its adverse effects on bacterial populations in the gut (the gut microbiota). Others propose that saturated fat is behind the effect, and unsaturated fats like olive oil are less fattening. Still others believe the harmful effects of overeating itself, including the inflammation caused by excess fat and sugar in the bloodstream and in cells, may affect the hypothalamus and gradually increase the set point. In the end, these mechanisms could all be working together to promote obesity. We don’t know all the details yet, but we do know that easy access to refined, calorie-dense, highly rewarding food leads to fat gain and insidious changes in the lipostat in a variety of species, including humans. This is particularly true when the diet offers a wide variety of sensory experiences, such as the hyperfattening “cafeteria diet” we encountered in chapter 1.

Personally, I believe overeating itself probably plays an important role in the process that increases the adiposity set point. In other words, repeated bouts of overeating don’t just make us fat; they make our bodies want to stay fat. This is consistent with the simple observation that in the United States, most of our annual weight gain occurs during the six-week holiday feasting period between Thanksgiving and the new year, and that this extra weight tends to stick with us after the holidays are over…because of some combination of food quantity and quality, holiday feasting ratchets up the adiposity set point of susceptible people a little bit each year, leading us to gradually accumulate and defend a substantial amount of fat. Since we also tend to gain weight at a slower rate during the rest of the year, intermittent periods of overeating outside of the holidays probably contribute as well.

How might this happen? We aren’t entirely sure, but researchers, including Jeff Friedman, have a possible explanation: excess leptin itself may contribute to leptin resistance. To understand how this works, I need to give you an additional piece of information: Leptin doesn’t just correlate with body fat levels; it also responds to short-term changes in calorie intake. So if you overeat for a few days, your leptin level can increase substantially, even if your adiposity has scarcely changed (and after your calorie intake goes back to normal, so does your leptin). As an analogy for how this can cause leptin resistance, imagine listening to music that’s too loud. At first, it’s thunderous, but eventually, you damage your hearing, and the volume drops. Likewise, when we eat too much food over the course of a few days, leptin levels increase sharply, and this may begin to desensitize the brain circuits that respond to leptin. Yet Rudy Leibel’s group has also shown that high leptin levels alone aren’t enough – the hypothalamus also seems to require a second “hit” for high leptin to increase the set point of the lipostat. This second hit could be the brain injury we, and others, have identified in obese rodents and humans.

And he isn’t sure exactly what aspect of the normal rodent diet promotes the healing:

I did do some research in mice suggesting that unrefined, simple food does reverse the brain changes and the obesity. I don’t claim that it’s all attributable to the blandness though– the two diets differed in many respects (palatability, calorie density, fiber content, macronutrient profile, fatty acid profile, content of nonessential nutrients like polyphenols). Also, we don’t know how well the finding applies to humans yet. One of the problems is that it’s very hard to get a group of humans to adhere strictly to a whole food diet for long enough to study its long-term effects on appetite and body fatness. People are very attached to the pleasures of the palate!

But all of this together seems to point to a potential synthesis between the hyperpalatability and lipostat models. Modern society has been incentivized to produce hyperpalatable, low-satiety food as superstimuli. Overeating this modern food in the short term raises the lipostat’s set point (for some reason, possibly involving brain damage and leptin resistance), causing us to gain weight in the long term, in a way that is very difficult to reverse.


But I still have trouble reconciling these two points of view.

A couple of days ago, I walked by an ice cream store. I’d just finished lunch, and I wasn’t very hungry at the time, but it looked like really good ice cream, and it was hot out, so I gave in to temptation and ate a 700-calorie sundae. Does this mean:

1. Based on the one pound = 3500 calories heuristic, I have now gained 0.2 lbs. That extra weight will stay with me my whole life, or at least until some day when I diet and eat 700 calories less than my requirement. If I were to eat ice cream like this a hundred times, I would gain twenty pounds.

2. My lipostat adjusts for the 700 extra calories, and causes me to exercise more, or ramp up my metabolism, or burn more brown fat, or eat less later on, or something. I don’t gain any weight, and eating the ice cream was that rarest of all human experiences, a completely guiltless pleasure. I should eat ice cream whenever I feel like it, or else I am committing the sin of denying myself a lawful pleasure.

3. My lipostat will more or less take care of the ice cream today, and I won’t notice the 0.2 pounds on the scale, but it is very gradually doing hard-to-measure damage to my hypothalamus, and if I keep eating ice cream like this, then one day when I’m in my forties I’m going to wake up weighing three hundred pounds, and no diet will ever be able to help me.

4. The above scenario is impossible. Even if I think I just ate ice cream because it looked good, in reality I was driven to do it by my lipostat’s quest for caloric balance. Any feeling of choice in the matter is an illusion.
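For what it’s worth, the bookkeeping behind scenario 1 is easy to spell out. This is just a sketch of the folk heuristic, not an endorsement of it; the function name is my own.

```python
# The naive calories-in-calories-out arithmetic from scenario 1.
CALORIES_PER_POUND_OF_FAT = 3500  # the classic rule-of-thumb figure

def naive_weight_gain_lbs(extra_calories):
    """Pounds gained if every surplus calorie were banked as fat."""
    return extra_calories / CALORIES_PER_POUND_OF_FAT

print(naive_weight_gain_lbs(700))        # 0.2 -- one sundae
print(naive_weight_gain_lbs(700 * 100))  # 20.0 -- a hundred sundaes
```

The whole dispute between scenarios 1 through 4 is over whether the body actually lets this division stand, or compensates it away.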

I think the reason this is so confusing is because the real answer is “it could be any one of these, depending on genetics.”

Note the position of the grey squares representing BMI

Right now, within this culture, variation in BMI is mostly genetic. This isn’t to say that non-genetic factors aren’t involved – the difference between 1800s America and 2017 America is non-genetic, and so is the difference between the perfectly-healthy Kitavans on Kitava and the one Kitavan guy who moved to New Guinea. But once everyone alike is exposed to the 2017-American food environment, differences between the people in that environment seem to be really hereditary and not-at-all-related to learned behavior. Guyenet acknowledges this:

Genes explain that friend of yours who seems to eat a lot of food, never exercises, and yet remains lean. Claude Bouchard, a genetics researcher at the Pennington Biomedical Research Center in Baton Rouge, Louisiana, has shown that some people are intrinsically resistant to gaining weight even when they overeat, and that this trait is genetically influenced. Bouchard’s team recruited twelve pairs of identical twins and overfed each person by 1,000 calories per day above his caloric needs, for one hundred days. In other words, each person overate the same food by the same amount, under controlled conditions, for the duration of the study.

If overeating affects everyone the same, then they should all have gained the same amount of weight. Yet Bouchard observed that weight gain ranged from nine to twenty-nine pounds! Identical twins tended to gain the same amount of weight and fat as each other, while unrelated subjects had more divergent responses…Not only do some people have more of a tendency to overeat than others, but some people are intrinsically more resistant to gaining fat even if they do overeat.
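It’s worth checking Bouchard’s numbers against the naive 3,500-calories-per-pound rule (my own back-of-the-envelope, not a calculation from the book): 1,000 surplus calories a day for one hundred days predicts roughly 28.6 pounds if nothing were burned off, which is almost exactly the top of the observed nine-to-twenty-nine-pound range.

```python
# Back-of-the-envelope on the Bouchard twin overfeeding study.
surplus_per_day = 1000   # calories above maintenance, per the study
days = 100

naive_gain_lbs = surplus_per_day * days / 3500  # ~3,500 cal per lb of fat
print(round(naive_gain_lbs, 1))  # 28.6 -- if nothing were burned off

# The least "gifted" twins (29 lbs gained) banked nearly the whole
# surplus; the most "gifted" (9 lbs) burned off roughly two thirds of it.
gifted_fraction_burned = 1 - 9 / naive_gain_lbs
```

So the spread in the data spans almost the entire range from “perfect naive bookkeeping” to “most of the surplus incinerated”, which is the point about genetic variation.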

The research of James Levine, an endocrinologist who works with the Mayo Clinic and Arizona State University, explains this puzzling phenomenon. In a carefully controlled overfeeding study, his team showed that the primary reason some people readily burn off excess calories is that they ramp up a form of calorie-burning called “non-exercise activity thermogenesis” (NEAT). NEAT is basically a fancy term for fidgeting. When certain people overeat, their brains boost calorie expenditure by making them fidget, change posture frequently, and make other small movements throughout the day. It’s an involuntary process, and Levine’s data show that it can incinerate nearly 700 calories per day. The “most gifted” of Levine’s subjects gained less than a pound of body fat from eating 1,000 extra calories per day for eight weeks. Yet the strength of the response was highly variable, and the “least gifted” of Levine’s subjects didn’t increase NEAT at all, shunting all the excess calories into fat tissue and gaining over nine pounds of body fat…

Together, these studies offer indisputable evidence that genetics plays a central role in obesity and dispatch the idea that obesity is primarily due to acquired psychological traits.

These studies suggest that one way genetics affects obesity is by altering the tolerance level of the lipostat. Genetically privileged people may have very finicky lipostats that immediately burn off any extra calories they eat, and which never become dysregulated. Genetically unlucky people may have weak lipostats which fail to defend against weight gain, or which are too willing to adjust their set point up in the presence of an unhealthy food environment.

So, given how many people seem to have completely different weight-gain-related experiences from each other, the wishy-washiness here might be a feature rather than a bug.

One reason I’ve always found genetics so exciting is that there are all these fields – nutrition is a great example, but this applies at least as much to psychiatry – where everyone has wildly different personal experiences, and where there’s a large and vocal population of people who say that the research is exactly the opposite of their lived experiences. People have tried to shoehorn the experiences to fit the research, with various levels of plausibility versus condescendingness. And for some reason, it’s always really hard to generate the hypothesis “people’s different experiences aren’t an illusion; people are genuinely really different”. Once you start looking at genetics, everything sort of falls into place, and ideas which seemed wishy-washy or self-contradictory before are revealed as just reflecting the diversity of nature. People who were previously at each other’s throats disputing different interpretations of the human condition are able to peacefully agree that there are many different human conditions, and that maybe we can all just get along. The Hungry Brain and other good books in its vein offer a vision for how we might one day be able to do that in nutrition science.


Lest I end on too positive a note, let me reiterate the part where happiness is inherently bad and a sort of neo-Puritan asceticism is the only way to avoid an early grave.

There’s an underlying fatalism to the discourse around “food reward”. If the enemy were saturated fat, we could just stick with the sugary sweetness Coca-Cola. If the enemy were carbohydrates, we could go out for steak every night. But what do we do if the enemy is deliciousness itself?

A few weeks ago Guyenet announced The Bland Food Cookbook, a collection of tasteless recipes guaranteed to be low food-reward and so discourage overeating. It was such a natural extension of his philosophy that it took me a whole ten seconds to realize it was an April Fools joke. But why should it be? Shouldn’t this be exactly the sort of thing we’re going for?

I asked him, and he responded that:

If I thought enough people would actually be capable of following the diet, I would consider making such a cookbook non-ironically. The second point I want to make here is that there are many ways to lose weight, and deliberately reducing food reward is only one of them. You could also exercise, eat a low-calorie-density diet, eat a high-protein diet, restrict a macronutrient, restrict animal foods, restrict plant foods, eat nothing but potatoes. Most approaches overlap with a low-reward diet to varying degrees, but I don’t think the low reward value encapsulates everything about why every weight loss strategy works. BTW, low-carb folks often have a knee-jerk reaction to the low-reward thing that goes something like this: “I eat food that’s delicious, such as steaks, bacon, butter, etc. It’s not low in reward.” But it is low reward in the sense that you’re cutting out a broad swath of foods, and an entire macronutrient, that the brain very much wants you to eat. Eating more of a particular category of rewarding food doesn’t completely make up for the fact that you’re cutting out a whole other category of rewarding food that you would avidly consume if you weren’t restricting yourself.

So things aren’t maximally bad. And hunter-gatherers enjoy their healthy diets just fine. And certainly there are things like steak and wine and so on which are traditionally “good food” without being hyperprocessed hyperpalatable junk food. But if you really enjoy a glass of Chardonnay, is that “food reward” in the sense that’s potentially dangerous? Is anything safe? What about mongongo nuts? Is there anywhere we can get them?

Overall I strongly recommend The Hungry Brain for everything I talk about here and for some other good topics I didn’t even get to (stress, sleep, a list of practical real-world diet advice). I would also recommend Guyenet’s other writing, especially his debate with Dr. David Ludwig on the causes of obesity (Part 1, Part 2, Part 3). I also recommend the list of diet tips that Guyenet gives at the end of the book. I won’t give them all away here – he’s been nice enough to me that maybe I should repay him by not reprinting the entire text of his book online for free. But it’s similar to a lot of standard advice for healthy living, albeit with more interesting reasoning behind it. Did you know that exercise might help stabilize the lipostat? Or that protein might do the same? Also, one piece of advice you might not hear anywhere else – potatoes are apparently off-the-charts in terms of satiety factor and may be one of the single best things to diet on.

And speaking of good things to diet on…

(note that this next part is my own opinion, not taken from The Hungry Brain or endorsed by Stephan Guyenet)

Slate Star Codex’s first and most loyal sponsor is MealSquares, a “nutritionally complete” food product sort of like a solid whole-foods-based version of Soylent. I’m having some trouble writing this paragraph, because I want to recommend them as potentially dovetailing with The Hungry Brain‘s philosophy of nutrition without using phrases that might make MealSquares Inc angry at me like “bland”, “low food reward”, or “not hyperpalatable”. I think the best I can come up with is “unlikely to injure your hypothalamus”. So, if you’re looking for an easy way to quit the junk food and try a low-variety diet that’s unlikely to injure your hypothalamus, I recommend MealSquares as worth a look.


Lipstick for your other lips: Meet the man who wants you to glue your vagina shut


Alex Casey interviews Doctor Dan Dopps, creator of a new vaginal adhesive that hopes to seal the menstrual product deal … literally. 

Bleeders, palm off your pads, trash your tampons and shoot your mooncup straight to the moon. There’s a brand new innovation in time-of-the-month technology called Mensez, a lipstick for your other set of lips that seals everything up down there like an Action Man when you are on your period, until you wee and all the blood evacuates somehow. Sorry, not sorry for the details – and a brief warning that things are going to get a lot more gnarly from here.

This menstruation innovation has been patented by Doctor Dan Dopps, a chiropractor in Wichita, Kansas. When someone first posted the Mensez website on our On the Rag Facebook page, I screamed and screamed and screamed until someone had to seal my mouth shut. Was it a joke? Would it actually work? Why a lipstick? Why us? Why anything?! I had to know more. I had to know everything. I had to … get on Skype with Dr Dan the Period Man himself.

I hope this isn’t too brash to begin, but is Mensez real or is this a Nathan For You hoax?

No, I am for real. Serious. I have a patent on this product and it is definitely for real.

As a woman who has the potential to use this product, can you give me the Shark Tank pitch?

It’s definitely serious and it’s about women. It’s not about me, it’s not about men, it’s about women and the issues that they have with their periods. It’s a tough subject to talk about, it’s taboo and a lot of women feel like men shouldn’t even be talking about it. It’s also an area in the modern world where there’s been no innovation in the last 80 years, you know? Nothing has changed.

[Editor’s note: Hello Mooncups in 2002, Hello Thinx Period Panties in 2008]

I am an innovator, a doctor and I like inventing. I came up with this idea and I think it’s very elegant. It’s going to work and it’s going to be so good for women. It’s not a glue, like so many have been saying. The labia is covered with a mucus membrane and they normally stick together a little bit. All we’re doing is enhancing that attraction so they cling together tight enough to retain the menstrual fluid inside the vagina, in the same place in a vagina that a tampon would be.

Obviously, you are a man of medicine so I feel like I can say this… I just feel like I would want the STRONGEST of glues if I am just going to just freestyle with nothing in there on my period.

Right. Well, it’s strong enough to do what we need, but it’s not a superglue; many women are afraid of that concept. Like you are saying you want to be secure, and I know that’s a big issue, but it’s not a superglue. The unique thing is that this glue does not react with blood, or sweat or perspiration, it only reacts with urine. When it gets wet with urine, it dissolves.

What if I do a little wee by accident? Am I going to be in big trouble?

You could be. On certain days, it may be a bad thing and it may not be an answer, just like tampons aren’t an answer for all women either. A girl will just have to test it and see if it works for her. Hopefully it will, I’m sure there will be lighter days where it will work just fine.

mfw do a little wee whilst wearing mensez

How are you so confident that this is going work?

Because of my background in chemistry, I’ve tested a lot of things (not on women, because we’re not in clinical trials of any kind) and I am confident the concept is there. It will take some product development beyond this point, but I was confident enough that the chemistry was going to work that I spent five years and a lot of money to get it patented.

Five years is no joke.

No joke, the patent office wasn’t going to give it to me because they were sure that it had been done before. They searched the world over for five years and found absolutely nothing like this. It is a unique idea. It’s really hard for women, being so used to the status quo, to even take this seriously because of the implications for them. The implications will be very good.

Some inspiration courtesy of the Mensez official website

How does a chiropractor get into the realms of menstrual innovation?

In my college education, I had OBGYN courses and I passed the national board exams. Even though I don’t practice, I know the anatomy and the physiology of female reproduction. I also have a next door neighbour who got Toxic Shock Syndrome from tampons and lost both her legs and seven of her fingers. Knowing her over the years, it’s always been in the back of my mind: why doesn’t someone innovate something new for women? To people who would say that this isn’t part of my specialty: everyone knows that innovation comes from thinking outside of the box.

Or inside the box, as it were.

That truly is one of the reasons I believe that nothing has been done. Doctors have been taught to stay in their little corner, and so they aren’t doing anything about it. It’s like you: you are a reporter, but you might be a really good cook too.

I’m not, but I appreciate the thought.

Everyone has other talents outside of their job, you know?

Very true. Where did you get the idea to put the product inside a lovely lipstick for a lady?

Well, my patent covers different methods of application. It could be a spray or a cream, but I think a lipstick is familiar to women: they know how to use it, they know what it is, and the compound we are proposing to use has about the right consistency. It could be in a powder, or it could be applied by a mini panty liner that it would transfer from. But the idea of a lipstick just fits perfectly: it’s just sticking the lips together.

we women love lippy + elegance

I have read that you think women waste 25% of their productivity on periods. That seems like a lot.

I didn’t really mean to say it that way. I just meant that it’s a distraction for women about 25% of the time; their life just isn’t normal. I use the analogy of playing a football game: in the fourth quarter of every game, the woman is distracted and not playing as well as she could. You aren’t going to win all the games that way.

Let’s say this lipstick idea takes off and women use it and get our 25% focus back. What would you hope we do with the extra time?

Have fun. Be women. Don’t we all just want more time to do the things we love? I don’t mean this in a misogynistic way. Do you know I didn’t even know what that word meant until about a month ago, when someone called me that? I am certainly doing this with women in mind. I won’t ever use it, but I think I can help a lot of women.

Do you have volunteers lining up to test the product? How confident are you that they could wear white pants when they test this out?

Oh, I’m totally confident. I would suggest when a woman first tries this that she wears some kind of liner for extra security. There are variations in anatomy that may not work for some women, and I’m sure it won’t work for everyone. You’ll just have to build your confidence with it. It’s a very small, concentrated amount of blood; it just looks like a whole bunch. I’ve had thousands of women emailing me saying they want to try it out.

Where does Mensez go from here?

We’ve had a number of companies contact us, and we are trying to find a good fit. We want someone with the ability to produce it and bring it to market, someone with a good research and development lab. It’s probably going to take a few years from this point to get into consumers’ hands, but I think it will happen.

when mensez hits the shelves…

I read an interview where you mentioned this was just the latest in many patents you own. What else have you invented?

I have an invisible UV paint company; that’s kind of fun. It’s a paint that you spray on anything, and it’s invisible until you shine a UV light on it. It’s a fun thing for kids, college students, Hollywood and governments. I have a patent on a water bottle cap, and a patent on a resealable snack bag. Those are some of the recent ones. I just like innovating; that’s just what I do.

Have you thought about merging the resealable snack bag, the invisible UV paint and the vagina lipstick into one product?

No, I think I’ll leave that one to you.




Dissecting Trump’s Most Rabid Online Following


Editor’s note: The story below contains two slurs that appear in the names of subreddits. Links to Reddit may also contain offensive material.

President Donald Trump’s administration, in its turbulent first months, has drawn fire from both the left and the right, including the ACLU, government ethics accountability groups and former Bush administration officials. But one group has shown nothing but unbridled enthusiasm for the president’s actions thus far: the over 380,000 members of r/The_Donald, one of the thousands of comment boards on Reddit, the fifth-most-popular website in the U.S.

The subreddit, where posters refer to President Trump as the “God Emperor” and “daddy,” is arguably the epicenter of Trump fervor on the internet. Its membership has grown steadily since the 2016 presidential election, though its members were especially active during the campaign. They mobilized to comb through the hacked Democratic National Committee emails published on WikiLeaks and played a large role in spreading information and theories about those emails. More broadly, they waged the “Great Meme War”: an effort to get Trump elected by bombarding the internet with social-media-ready content promoting Trump or bashing Democratic candidate Hillary Clinton. Some of those memes played on Clinton’s campaign gaffes, such as her use of the phrase “basket of deplorables,” while others involved an emerging pro-Trump iconography centered around images of Pepe the Frog — a cartoon character with a convoluted history that gained especial prominence after it was co-opted by white nationalists as a sort of unofficial mascot. Members of r/The_Donald like to say they “shitposted” Donald Trump into office; regardless of whether the flood of memes swung the election, it did overwhelm the front page of Reddit to such an extent that the site’s CEO rushed to deploy a change in Reddit’s algorithm that limits the influence of any single subreddit.

What can we say about the animating force behind r/The_Donald? For one, it’s not universal among Trump supporters; nearly 63 million Americans voted for Trump, and the 382,000 members of r/The_Donald represent less than 1 percent of that. But in the subreddit’s vocal and dedicated membership, you can find an influential strain of Trump boosterism. According to former staffers, the Trump campaign team monitored the subreddit for messages that resonated, and Trump himself participated in an “Ask Me Anything” on r/The_Donald in July. Since the election, the subreddit has continued to serve as a conduit through which fringe conspiracy theories — often started on sites like 4chan.org, a freewheeling image-based message board best known for creating memes, posting stolen celebrity nudes and birthing the hacker collective Anonymous — enter a larger online discourse. The most striking example has been “Pizzagate,” the false idea that a pizza parlor in Washington, D.C., is the center of a child-trafficking ring involving Clinton campaign chairman John Podesta, which prompted a man from North Carolina to “self-investigate” the shop, where he fired a rifle several times and threatened an employee.

r/The_Donald has repeatedly been accused of offering a safe harbor where racists and white nationalists can congregate and express their views, much the same way that Trump’s campaign is said to have mobilized and emboldened those same groups. And indeed, r/The_Donald is home to some pretty vile comment threads. The subreddit’s moderators declined to talk to us about their community and accused FiveThirtyEight of being “fake news.” Regardless, we think there’s a way to get at the nature of r/The_Donald that is more rigorous than doing a quick scan of its comments (and certainly more objective than simply soliciting the opinions of the group’s fans and detractors).

We’ve adapted a technique that’s used in machine learning research — called latent semantic analysis — to characterize 50,323 active subreddits based on 1.4 billion comments posted from Jan. 1, 2015, to Dec. 31, 2016, in a way that allows us to quantify how similar in essence one subreddit is to another. At its heart, the analysis is based on commenter overlap: Two subreddits are deemed more similar if many commenters have posted often to both. This also makes it possible to do what we call “subreddit algebra”: adding one subreddit to another and seeing if the result resembles some third subreddit, or subtracting out a component of one subreddit’s character and seeing what’s left. (There’s a detailed explanation of how this analysis works at the bottom of the article).

Here’s a simple example: Using our technique, you can add the primary subreddit for talking about the NBA (r/nba) to the main subreddit for the state of Minnesota (r/minnesota) and the closest result is r/timberwolves, the subreddit dedicated to Minnesota’s pro basketball team. Similarly, you can take r/nba and subtract r/sports, and the result is r/Sneakers, a subreddit dedicated to the sneaker culture that is a prominent non-sport component of NBA fandom.

This may all seem pretty abstract, but that same algebra can be applied to r/The_Donald. What happens when you break r/The_Donald up into subgroups using subreddit subtraction? What happens when you add unrelated subreddits to r/The_Donald? Before we get into those questions, let’s take a look at the subreddits that are most similar to r/The_Donald, according to our analysis:

r/Conservative and r/AskTrumpSupporters top the list, followed by r/HillaryForPrison, a subreddit that refers to Hillary Clinton by the pronoun “it” and notes in bold on the sidebar that “Putting It behind bars is fun!” After that it’s r/uncensorednews, a subreddit started by white nationalist moderators who found the existing, extremely popular r/news subreddit to be too liberal.

So does this mean that users who comment on r/The_Donald comment on r/Conservative more than any other subreddit? No. Eight percent of r/The_Donald’s users have also commented on r/Conservative, which is about one-fifth the size of r/The_Donald, and conversely, 51 percent of commenters on r/Conservative have commented on r/The_Donald. But the raw number of shared commenters isn’t very informative on its own because, for example, almost every subreddit will have a lot of overlap with big, really popular subreddits such as r/AskReddit, which has over 16 million members. Our analysis is a bit more subtle: We weight the overlaps in commenters according to, in essence, how surprising those overlaps are — that is, how much more two subreddits’ user bases overlap than we would expect them to based on chance alone. Since essentially every subreddit overlaps heavily with super popular groups like r/AskReddit, that result is no longer surprising and gets a lower weight. What rises to the top, then, are the more unlikely results that are characteristic of a specific subreddit rather than those that are common to Reddit as a whole. And by looking at these weighted commenter overlap rankings across thousands of subreddits, we built a profile for each subreddit that helps capture what defines the average commenter on each specific subreddit.
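The "surprisingness" weighting described above can be illustrated with pointwise mutual information, the measure the methods section at the bottom of the article names. This is only a sketch with invented commenter counts, not FiveThirtyEight's actual numbers:

```python
import math

def pmi(n_both, n_a, n_b, n_total):
    # Pointwise mutual information: the log of how much more often
    # commenters appear in both subreddits than chance alone predicts.
    p_both = n_both / n_total
    p_a = n_a / n_total
    p_b = n_b / n_total
    return math.log2(p_both / (p_a * p_b))

# A huge subreddit overlaps with everything, so even a big raw overlap
# is unsurprising and gets a low weight...
big = pmi(n_both=50_000, n_a=100_000, n_b=500_000, n_total=1_000_000)

# ...while heavy overlap between two small subreddits is surprising
# and gets a high weight.
niche = pmi(n_both=5_000, n_a=10_000, n_b=20_000, n_total=1_000_000)

print(big, niche)  # 0.0 vs. about 4.6
```

With these made-up counts, the giant subreddit's overlap is exactly what chance would predict (weight 0), while the niche pair's overlap is 25 times its chance rate, which is exactly the kind of signal the profiles are built from.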

There’s nothing too revealing in that list above — all of those subreddits are explicitly pro-Trump, anti-Clinton or politically conservative. So let’s use subreddit algebra to dissect r/The_Donald into its constituent parts. What happens when you filter out commenters’ general interest in politics? To figure that out, we can subtract r/politics from r/The_Donald. The result most closely matches r/fatpeoplehate, a now-banned subreddit that was dedicated to ridiculing and bullying overweight people.

r/The_Donald − r/politics =

1. r/fatpeoplehate (0.275): For sharing insults aimed at overweight people (now banned)

2. r/TheRedPill (0.274): Virulently misogynistic subreddit, nominally devoted to “sexual strategy”

3. r/Mr_Trump (0.266): Now-dormant subreddit formed during a moderator schism at r/The_Donald

4. r/coontown (0.266): Open and enthusiastic racism against black people (now banned)

5. r/4chan (0.253): Screenshots of 4chan.org posts

Subreddit algebra isn’t quite as simple as A – B = C. It’s more like A – B is closer to C than anything else, but it’s also pretty similar to D and not far off from E. So when you subtract r/politics from r/The_Donald, you actually get a list of every subreddit in our analysis, ranked in order of their similarity to the result of that subtraction. We’re showing just the top five.

And that top five isn’t exactly pretty, though it does support the theory that at least a subset of Trump’s supporters are motivated by racism. The presence of r/fatpeoplehate at the top of the list echoes some of President Trump’s own behavior, including his referring to 1996 Miss Universe winner Alicia Machado as “Miss Piggy” and insulting Rosie O’Donnell about her weight. The second-closest result, r/TheRedPill, describes itself in its sidebar as a place for “discussion of sexual strategy in a culture increasingly lacking a positive identity for men”; named after a scene from “The Matrix,” the group believes that women run the world and men are an oppressed class, and from that belief springs an ideology that has been described as “the heart of modern misogyny.” r/Mr_Trump self-describes as “the #1 Alt-Right, most uncucked subreddit” — referring to a populist white-nationalist movement and an increasingly all-purpose insult meant to denigrate others’ masculinity — and the appallingly named r/coontown is the now-banned but previously central home to unrepentant racism on Reddit. Finally, coming in at No. 5 is r/4chan, a subreddit dedicated to posting screenshots of threads found on 4chan, where many users supported Trump for president and where the /pol/ board in particular has a strongly racist bent.

We dissected r/The_Donald in a bunch of other ways using subreddit algebra. Here are some of the more interesting results:

r/The_Donald − r/conspiracy =

1. r/CFB (0.269): For college football discussion

2. r/nfl (0.255): For NFL discussion

3. r/TrumpMinnesota (0.244): Small subreddit for Trump supporters in Minnesota

r/The_Donald + r/europe =

1. r/european (0.781): Now-private subreddit that hosted racist and anti-Semitic commentary on European affairs

2. r/worldnews (0.768): Main subreddit for discussion of world affairs

3. r/syriancivilwar (0.688): For discussion of the conflict in Syria

r/The_Donald + r/Games =

1. r/KotakuInAction (0.676): Main hub of Gamergate discussion on Reddit

2. r/gaming (0.619): Largest general gaming subreddit

3. r/Cynicalbrit (0.586): Unofficial fanpage for the internet personality TotalBiscuit

So even adding innocuous subreddits, such as r/europe and r/Games, to r/The_Donald can result in something ugly or hate-based — r/european frequently hosts anti-Semitism and racism, while r/KotakuInAction is Reddit’s main home for the misogynistic Gamergate movement. Which raises a question: Are these hateful communities linked specifically to Trump’s supporters on Reddit, or are they common to politically active Reddit users in general? To get at that question, let’s try subtracting r/politics from r/conservative:

r/Conservative − r/politics =

1. r/Mary (0.265): Subreddit for devotees of the biblical Mary

2. r/RCIA (0.264): For those considering converting to Catholicism (RCIA means “rite of Christian initiation for adults”)

3. r/ak47 (0.241): For discussing the AK-47 rifle

4. r/TelaIgne (0.240): A space where Catholic redditors pray for other redditors (the name is Latin for “web on fire”)

5. r/ChristianJewishRoots (0.240): For discussion of the relationship between Christian and Jewish theology

When we do this, we find that the top result is a subreddit dedicated to the glorification of the biblical Mary, and the other related subreddits are similarly focused on Christianity, except for r/ak47, which is dedicated to the famous rifle.

So what about the other 2016 presidential candidates? How does Trump’s Reddit following compare to that of Hillary Clinton or Democratic primary candidate Bernie Sanders (whose r/SandersForPresident subreddit still has over 215,000 members)? This analysis lets us take any subreddit and say how “Trump-ish” it is vs. how “Clinton-ish” or “Sanders-ish” it is. Here’s a selection of subreddits plotted on a three-way spectrum from r/The_Donald to r/SandersForPresident to r/hillaryclinton.

Subreddits dedicated to politics and news are smack in the middle. r/Feminism is on the Sanders/Clinton side of the spectrum, though slightly closer to Clinton, as is r/TheBluePill, a feminist parody of r/TheRedPill; r/BasicIncome (a subreddit advocating for a universal basic income) is also on the liberal side, though slightly closer to Sanders.

And all of those hate-based subreddits? They’re decidedly in r/The_Donald’s corner.

How does this work?

Latent semantic analysis (LSA) — the technique from natural language processing that we’ve adapted for this analysis — is often used to determine how related one book, article or speech is to another. The basic idea is that documents using similar words with similar frequency are probably closely related. But what about the words themselves? LSA also allows you to assess how similar words are by looking at the other words that show up around them. So, for example, two words that might rarely show up together (say “dog” and “cat”) but often have the same words nearby (such as “pet” and “vet”) are deemed closely related. The way this works is that every word in, say, a book is assigned a value based on its co-occurrence with every other word in that book, and the result is a set of vectors — one for each word — that can be compared numerically. On a very technical level, the way you determine how similar two words like “dog” and “cat” are is by looking at the angle between their two vectors (there’s a visual guide to understanding these concepts below).

Vectors are interesting because they can be enormous, multidimensional things that contain a huge amount of information — but you can still use them to do grade-school arithmetic. When machine-learning researchers at Google tried adding word vectors together or subtracting one from another, they discovered semantically meaningful relationships. For example, if you take the vector for “king,” subtract the vector for “man” and add the vector for “woman,” the closest result is the vector for “queen.” Slightly more subtle relationships were also exposed: e.g. “Rome” plus “Germany” equals “Berlin.” It turned out to be a very powerful way of analyzing language. Here, we are also using co-occurrence to try to uncover the nature of different subreddits and their relationships to one another.
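The word-vector arithmetic can be illustrated in a toy two-dimensional space. The coordinates below are invented purely for illustration (real embeddings have hundreds of dimensions and are learned from text, not hand-assigned):

```python
# Toy embeddings: dimension 0 is roughly "royalty", dimension 1 "gender".
# These coordinates are made up to illustrate the arithmetic.
vectors = {
    "king":  [9, 7],
    "man":   [1, 7],
    "woman": [1, -7],
    "queen": [9, -7],
}

def nearest(v, vocab):
    # Nearest word in the vocabulary by squared Euclidean distance.
    return min(vocab, key=lambda w: sum((a - b) ** 2 for a, b in zip(vocab[w], v)))

# king - man + woman, component by component:
result = [k - m + w for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])]

print(nearest(result, vectors))  # → queen
```

Subtracting "man" strips out the gender component while keeping royalty; adding "woman" puts the other gender back in, landing on "queen".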

The idea of co-occurrence is clear when we’re talking about words, but what does it mean for subreddits? We found relationships by looking at how many commenters various subreddits have in common — that’s our measure of co-occurrence. Here’s a simplified example of how this works:

Let’s say we want to see how subreddits in the world of health and exercise are related to one another. To do that, we can plot every subreddit in terms of two key subreddits — r/nutrition and r/Outdoors.

Let’s start with r/running. That subreddit has, let’s say, one commenter who has also commented in r/nutrition and three who have also commented in r/Outdoors. So we give it a vector of [1, 3].

Now let’s add two more subreddits: r/weightlifting and r/Fitness. r/weightlifting has three commenters in common with r/nutrition and one with r/Outdoors, and r/Fitness has four and three, respectively.

Now we can do some addition by combining the vectors. If we add r/weightlifting to r/running, we get a third vector that looks similar to r/Fitness. The angle between the two gives us a measure of just how similar.

So instead of (King – Man) + Woman = Queen, you get Running + Weightlifting = Fitness.
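Using those toy overlap counts, the whole example fits in a few lines of Python (cosine_similarity here is a hand-rolled helper, not the code FiveThirtyEight actually ran):

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u · v) / (|u| |v|): 1.0 means the vectors point the
    # same way, 0.0 means they share no direction at all.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Overlap counts with [r/nutrition, r/Outdoors] from the toy example:
running = [1, 3]
weightlifting = [3, 1]
fitness = [4, 3]

# Subreddit algebra: Running + Weightlifting
combined = [a + b for a, b in zip(running, weightlifting)]  # [4, 4]

print(cosine_similarity(combined, fitness))  # ≈ 0.99, a very close match
```

The combined vector [4, 4] points in almost the same direction as r/Fitness's [4, 3], which is exactly what "the angle between the two gives us a measure of just how similar" means in practice.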

For over 50,000 subreddits that span a huge range of topics, it gets a bit more complicated. Instead of characterizing all of them in terms of just two subreddits — like r/Outdoors and r/nutrition above — we ranked all of the subreddits by the number of unique commenters and then pulled out the 2,133 subreddits whose unique commenter rank was between 200 and 2,201 (there are some ties). We used this subset of subreddits to characterize all active subreddits. We then combined all the resulting subreddit vectors into a big matrix with 50,323 rows and 2,133 columns and converted the raw co-occurrences to positive pointwise mutual information values. Similarity between subreddits is based on the cosine similarity of their vectors — a measure of the angle between them. To perform subreddit algebra, subreddit vectors are added and subtracted using standard linear algebra, and then the cosine similarities are calculated to rank subreddits by their similarity to the combination.
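As a rough sketch of that pipeline on a tiny invented co-occurrence matrix (the real analysis was done in R on a 50,323 × 2,133 matrix; the names and counts here are made up):

```python
import numpy as np

# Rows: subreddits being characterized; columns: "anchor" subreddits.
# Entry [i, j] counts commenters shared by subreddit i and anchor j.
counts = np.array([
    [1.0, 3.0],   # r/running
    [3.0, 1.0],   # r/weightlifting
    [4.0, 3.0],   # r/Fitness
])

# Convert raw co-occurrences to positive pointwise mutual information:
# the log of the joint probability over the product of the marginals,
# with negative values (less overlap than chance) clipped to zero.
joint = counts / counts.sum()
row_marg = joint.sum(axis=1, keepdims=True)
col_marg = joint.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):  # guards log(0) for zero counts in a real matrix
    ppmi = np.maximum(np.log(joint / (row_marg * col_marg)), 0.0)

def cosine(u, v):
    # Cosine similarity: the angle-based measure used to rank subreddits.
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Subreddit algebra on the PPMI vectors, then rank candidates by cosine.
combined = ppmi[0] + ppmi[1]
similarity = cosine(combined, ppmi[2])
print(similarity)
```

In the full analysis, this similarity score would be computed against every row of the matrix, and the candidates sorted to produce ranked lists like the ones shown above.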

Are we sure this is meaningful?

To test our analysis, we looked at some cases of subreddit algebra where the results should be obvious — like the example above where adding r/nba to r/minnesota should (and does) yield r/timberwolves as the best fit. Other combinations of a sport and a location similarly result in location-specific discussions of that sport.

We also looked at a test case involving a harder-to-see relationship. If you take the subreddit for managing money and investing, r/personalfinance, and subtract the subreddit for frugality, r/Frugal, the resulting most similar subreddit is r/wallstreetbets, a subreddit about taking extreme risks in the stock market.

The data and code behind this analysis

The Reddit comments data is from a collection hosted on Google’s BigQuery of 1.4 billion comments from January 2015 to December 2016. The analysis itself was done in R. You can find the code here.

Development by Justin McCraw


Perineal agriculture?


Jack Maloney sent in a link to a talk at the University of Kansas Biodiversity Institute about "Plant Soil Microbiomes in Perineal Agriculture":

Switching from an annual agriculture system to a perineal agriculture system that most closely resembles natural prairies will include changes to the way we manage soil, the lifespan of the plants, and the diversity of the crops there. KU Assistant Professor of Ecology and Evolutionary Biology Ben Sikes will talk about how each of these changes will influence diseases and the beneficial partners that live in the soil.

Presumably "perineal" in this context is a Cupertino for "perennial". Jack's comment:

'Perineal agriculture': not a subject I even want to think about, much less attend a lecture about!

The obligatory screenshot:
