What if the LHC was responsible for a zombie apocalypse?
They must have had barrels of fun making this.
This composite picture, put together from images taken by Japan's ALOS satellite, shows intensely worked farmland in the Black Earth cropland region south of Moscow.
Three images, taken at different times of the year and each coloured red, green or blue, are superimposed on one another. Strongly coloured parts of the picture reveal changes in the Earth's surface over the year.
As well as detecting these changes, subtle differences between the colour filters can also tell us what the crops themselves are.
The questions 'how many even numbers are there?' and 'how many natural numbers are there?' would both seem to have the answer 'infinity'. Our intuition would tell us that there are twice as many natural numbers as even numbers (since for every even number there is also an odd one), and we rationalise this internally by thinking "2 x ∞ = ∞".
However, is it really justified to assert that there are twice as many naturals as evens? Actually it turns out that it isn't. This can be proven quite easily by arranging them into sets. Let set A contain all the natural numbers and B contain the even numbers.
Two sets contain the same number of elements when you can construct a one-to-one mapping between elements of each set, such that every element is paired with exactly one element of the other set and no elements are left over. For example, the set {dog, table, fork} can be proven to contain the same number of elements as the set {cat, chair, spoon} through the mapping:
dog ↔ cat, table ↔ chair, fork ↔ spoon
We have now formalised the notion of counting. This makes it extraordinarily easy for us to assess the relative sizes of very large and possibly infinite sets. By finding a neat way to construct a mapping from set to set, we've abolished the need to explicitly count out each element on our fingers and toes.
With the sets A and B we can now construct the following mapping:
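One such mapping - presumably what the original figure showed - simply pairs each natural number n with its double:

$$1 \leftrightarrow 2,\quad 2 \leftrightarrow 4,\quad 3 \leftrightarrow 6,\quad 4 \leftrightarrow 8,\quad \ldots,\quad n \leftrightarrow 2n$$

Every natural number is paired with exactly one even number, and vice versa, with nothing left over.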
This should be enough to convince you that the sets A and B are identical in size and there are just as many natural numbers as evens!
Now let's ask the question, 'which set is bigger: the set of natural numbers, N, or the set of rational numbers, Q?'. Q is the set of all numbers that can be written as a fraction of integers, i.e. 1/2, 321/45 and 19/1238912 are all members of Q. 'There are infinitely many rational numbers in the range [0, 1] alone, so for every member of N there are ∞ members of Q - of course they must far exceed the naturals!' would be the rational intuition of any human being. But again, it can be shown that Q and N are the same size.
All of the rational numbers can be written out in an infinite table:
With a little bit of thought, you can see that every rational number you can think of will be contained in the above table somewhere. This represents the table of all elements of Q. With some more thought you can see that by following the path shown on the right, you will traverse the whole table, reaching any given element in a finite number of steps.
In other words, for every member of Q there corresponds a unique member of N: we have constructed our one-to-one mapping and shown that the rational numbers are 'countably infinite', just like the natural numbers.
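To make the traversal concrete, here is a small sketch of the diagonal walk - purely illustrative, not from the original post - which pairs each fraction p/q with a natural number as it is reached, skipping duplicates like 2/4 that reduce to something already counted:

```javascript
// Sketch: enumerate the table of fractions p/q along anti-diagonals,
// pairing each entry with a natural number as it is reached.
function* rationals() {
  const seen = new Set();
  for (let d = 2; ; d++) {            // d = p + q indexes the anti-diagonal
    for (let p = 1; p < d; p++) {
      const q = d - p;
      const key = `${p / gcd(p, q)}/${q / gcd(p, q)}`; // lowest terms
      if (!seen.has(key)) {           // skip fractions already counted
        seen.add(key);
        yield `${p}/${q}`;
      }
    }
  }
}

function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}

// Pair the first few rationals with the natural numbers 1, 2, 3, ...
let n = 1;
for (const r of rationals()) {
  console.log(`${n} <-> ${r}`);
  if (++n > 10) break;
}
```

Any given fraction sits on some finite anti-diagonal, so the walk reaches it in a finite number of steps - which is exactly the point.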
Don't start getting the idea that all infinite sets are the same size, though. There are infinite sets whose elements are 'uncountably infinite' - where no mapping to N exists which ensures that any given element may be reached within a finite number of steps. One such set is that of the real numbers, R. This set contains all the rational and irrational numbers. An irrational number is one that cannot be written in the form a/b, meaning that it has an infinite, non-repeating decimal expansion. Examples of irrational numbers include:
Let's now imagine that there was some mapping that paired up each member of N with a member of R. This might be some complicated map that isn't as obviously ordered as the one before. Imagine that it looked something like this:
A neat argument from Georg Cantor shows that such a mapping cannot exist. You have to imagine that this table (like the one for Q) contains every member of R precisely once. We now take the ith digit of the ith number in the list, add 1 to it, and use the result as the ith digit of a new number that we construct:
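In symbols (just a sketch of the construction): write a_{n,i} for the ith decimal digit of the nth number in the list, and give the new number the digits

$$b_i = (a_{i,i} + 1) \bmod 10,$$

wrapping 9 round to 0. The new number then differs from the nth entry of the list in its nth digit, for every n.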
The number we have constructed, 1.394125794..., differs from every number in the list, but the list was meant to contain every real number in the first place! This proves by contradiction that no such mapping can exist at all!
Now this is really weird. Before, it was quite easy to vaguely say 'Well, infinity is infinity, so anything that is infinite is just as infinite as anything else that is infinite...', but we now realise that infinity is a much subtler concept, and that there are different levels of infinity. The set of real numbers is necessarily larger than both the set of naturals and the set of rationals.
Freaked out? It might be worth mentioning that Cantor, to whom most of this work is attributed, was institutionalised for much of his later life, and died in a sanatorium in 1918.
(Photo: Marek Uliasz)
*kinda
When I was 15 I loved editing my MySpace page (RIP). I was never a pro at web design, so creating HTML and CSS from scratch was out of the question. Instead, I would find a pre-made layout and, treating it like a kind of Rosetta Stone, proceed to edit it as much as I could. Though at that time I could barely do anything beyond change a font colour and fix a background image, when I finally had my page as I wanted it I felt like I had hacked into the Pentagon.
I was sad when MySpace flopped (I remember getting Facebook for the first time and thinking "So where do I edit the html?") but when I found Tumblr I rediscovered my joy for making things look pretty on the internet.
Now that I feel much more confident with computer code, I took on the challenge of constructing a Tumblr theme from scratch - something which I've always wanted to do.
The first thing I did was consult the Tumblr docs tutorial. They give a very handy skeleton markup for how the HTML document of a theme should be structured. When you make a post, the information gets stored in a database and can be accessed by variables in the HTML. For instance, your description is pulled from the server and placed in the document using the variable <p>{Description}</p>. Other variables include things like {Title}, {Quote}, {NoteCountWithLabel}, etc.
My starting point was the skeleton markup which sets up the mechanics of a page - blog name, description and sequential list of recent posts. In case you think this is cheating, here is what my blog looked like at that point:
Not ideal.
My next move was to begin styling. I did this by utilising the beautiful Twitter Bootstrap, an extensive and powerful framework that seemed perfect for what I wanted to do. At its heart is a large CSS file that defines many classes for things such as navigation bars, drop-down menus and buttons.
Instead of copying and pasting the whole CSS file into my document, I laid out all the <div> elements the way I wanted them, and whenever I included an element with a new class, I placed the CSS code for that class into the document.
Twitter Bootstrap uses a 12-column fluid grid system (widths of columns adjust to the width of the browser) and after playing around a little I decided to use a 9-3 split where 75% of the width is given to my posts and 25% to my sidebar.
One of the most distinctive elements in Bootstrap is the so-called '.hero-unit', a chunky banner that serves as an eye-catching welcome to a website:
Quite liking this, I decided to use an edited version for the banner at the top of my page. Keen followers might notice the image from a previous post of mine. It's a sketch I made in Processing.
Having got the main layout of the page in place all I had to do was iron it out:
Finally, you may notice that if you're on one of the 'Home', 'About' or 'Message' tabs, that tab appears activated. This wasn't possible to do through Bootstrap alone, which can only activate elements statically - that is, different HTML would be required on each page to activate that page's button. While on the Home page, the Home button's class attribute is "active" while the others are "":
To change the class attribute from "" to "active" dynamically when a page is accessed, I had to write a small bit of JavaScript that runs in the browser, works out which page it's on, and changes the class attributes accordingly:
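A minimal sketch of that sort of script - with illustrative element ids ('nav-home', 'nav-about', 'nav-message') and paths that won't exactly match the real theme - looks something like this:

```javascript
// Sketch: highlight the nav button for the current page.
// Assumes nav links with ids "nav-home", "nav-about" and "nav-message",
// and that the About and Message pages live at /about and /ask.
window.addEventListener('DOMContentLoaded', function () {
  var path = window.location.pathname;
  var current = 'nav-home';                 // default to Home
  if (path.indexOf('/about') === 0) {
    current = 'nav-about';
  } else if (path.indexOf('/ask') === 0) {
    current = 'nav-message';
  }
  var link = document.getElementById(current);
  if (link) {
    link.className = 'active';              // Bootstrap styles .active nav items
  }
});
```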
And with that, my blog was/is finally(ish) done. A couple of things I might change or add are a search bar in the nav at the top, and pagination in the sidebar.
I'd also welcome any thoughts or suggestions!
If you study physics or engineering, chances are that looking at the above formula gives you painful flashbacks of either trying to prove in an exam that it's its own inverse, or using it to program a Wiener filter (or in my case, both). But while the Fourier Transform is on the one hand an incredibly useful tool in data/image/audio analysis, it also provides some beautiful explanations for why the world is as it is.
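For reference, in one common convention (the factors of 2π get shuffled around depending on taste) the transform and its inverse read:

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt, \qquad f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega$$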
Dubstep (wub wub wub)
Whether or not cooking with Skrillex is your thing, pretty much all music is made to sound better by the Fourier Transform. For the uninitiated, it transforms a function of time into a function of frequency. Think of a graphical equaliser - that is the Fourier Transform of a song as you listen to it in real time. It takes the waveform (which is a function of amplitude against time), and decomposes it into its constituent frequencies. It shows you how much bass, middle and treble there is at any given time, and this lets music producers have their way with your ears by altering the amplitude at different frequencies, or in other words, filtering it.
Dr. Dre is a big fan of the work of Joseph Fourier
If you've ever used Photoshop then you might have used a Gaussian blur. What it's mathematically doing is treating your image as a 2D grid of numbers and convolving it with a 2D Gaussian function. A single bright spot will spread out into a dim blur like this:
Now, if you have an image that has been blurred, knowing - or guessing - the function it's been blurred by (which doesn't have to be digital: a camera lens is an analogue convolving function) allows you to deconvolve the image, not too dissimilar to the zoom-and-enhance shtick you see in many sci-fis. This deconvolution can only really be performed in practice through the use of the Fourier Transform (in particular, by utilising the convolution theorem).
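The key fact that makes this work is the convolution theorem: convolution in real space becomes plain multiplication in Fourier space, so, at least in principle, you can divide out the blur and transform back:

$$\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\} \quad\Longrightarrow\quad f = \mathcal{F}^{-1}\!\left\{ \frac{\mathcal{F}\{f * g\}}{\mathcal{F}\{g\}} \right\}$$

In practice, noise makes that division delicate, which is exactly where filters like the Wiener filter mentioned earlier come in.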
The Fourier Transform is used ubiquitously in data analysis, signal processing, and image/video/audio enhancement due to its ability to work magic on the crappy file you have and make it better and clearer. In fact I'm pretty sure it's actual witchcraft.
Heisenberg's uncertainty principle (wub wub wub?)
While the Fourier Transform is useful for many practical applications, it has a very profound impact on the nature of reality itself. This is all to do with the relationship it draws between its two variables.
Above, you can see it equates a function of frequency, ω, with a function of time, t. These variables are known as Fourier conjugates, and they're flip sides of each other. You can't just choose any variable you like to be on the left hand side; a transform of a function of time is always a function of frequency. However there are other pairs of variables you can use.
One such pair of Fourier conjugates is position, r, and wavenumber, k - the wavenumber is the reciprocal of wavelength, and turns out to be a more natural measure of the spatial variation of a wave.
One of the weirdest results in quantum physics is the Heisenberg Uncertainty Principle, which states that for a given particle (or more formally, a quantum system), certain pairs of variables cannot both be known with arbitrary precision. For instance, a particle cannot have both an exact position and an exact momentum. The more precisely its position is defined, the more uncertain its momentum is, and vice versa. It's not a measurement problem but an inherent property of nature itself. Why?
Back to the Fourier Transform... It turns out that the transform of a sharp, thin spike is a spread-out, 'blurry' hill like the Gaussian above. Therefore, a function representing a very well-defined position (or time), such as a sharp thin spike, corresponds to a delocalised, spread-out function of wavenumber (or frequency). In quantum mechanics, momentum and wavenumber are essentially the same thing (up to a constant), and therefore Heisenberg's Principle must necessarily hold true!
In fact, there also exists a Heisenberg Principle for time and frequency. In quantum mechanics, frequency is interchangeable with energy (again, multiplied by the same constant) and therefore the energy of a particle is uncertain over arbitrarily small time-frames. This allows particles in the quantum regime to 'borrow' enough energy to tunnel through a potential barrier so long as they pay it back in a small enough timeframe to be in keeping with Heisenberg.
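In symbols: since p = ħk and E = ħω, the trade-off between the spread of a function and the spread of its Fourier transform becomes the familiar pair of inequalities

$$\Delta x\, \Delta p \ge \frac{\hbar}{2}, \qquad \Delta E\, \Delta t \gtrsim \frac{\hbar}{2}.$$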
Joseph Fourier is a big fan of the work of Dr. Dre
After spending over a decade playing the empire-building strategy game, Civilization II, a redditor called Lycerius has possibly gone as far into the future in the game as anyone ever has. What does the world look like in 3991 AD?
It's not good. He reports that the world has spent over 1700 years locked in a deadly stalemate between the only three remaining nations - the Vikings, Americans and Celts. "The world is a hellish nightmare of suffering and devastation", he says.
Repeated melting of the icecaps due to nuclear war has rendered all but the highest mountains infertile swampland, irradiated and useless for farming. As a result, there are no large cities and the world's population is 10% of what it was at its peak around 2000.
A never-ending war saps what few resources are left; constant fighting requires all efforts to go into building roads to keep the front lines well supplied. Lycerius, who plays as the Celts, is at a loss as to how to rebuild the world. Ceasefires don't last long enough to begin the slow process of improving living conditions.
In Civilization II, the player chooses from 21 different factions and controls every aspect of their empire's development. Players engage in international diplomacy, scientific advance, and civic improvement in order to advance through the game.
One thing I found interesting was that Lycerius really wanted to keep his nation a democracy, but in the end was forced to turn it into a communist totalitarian state in order to keep up with the theocratic Vikings and Americans:
I wanted to stay a democracy, but the Senate would always over-rule me when I wanted to declare war before the Vikings did. This would delay my attack and render my turn and often my plans useless. And of course the Vikings would then break the cease fire like clockwork the very next turn.
Why is the world locked in such a bitter and hopeless stalemate? Once you (and all the other nations) have maxed out the technology there is to develop, there is nowhere else to go and no faction has the advantage. Everything balances out and a never-ending war ensues.
Is this a possible hint as to what might be in store for our real future? While this is by no means an exhaustive academic study of the topic, it does parallel many ideas already present in our culture. The three-nation everlasting war scenario is chillingly close to the setting of the novel Nineteen Eighty-Four, where the three superpowers Oceania, Eurasia and Eastasia battle it out in a causeless eternal war. It also shows how extreme global circumstances can stoke the fires of totalitarianism, where a lack of regard for the welfare of citizens can give such a state the upper hand in a long and drawn-out conflict.
Thankfully, humanity prevails: reddit has already spawned a community of Civilization gamers sharing Lycerius' savegame file and trying to find a solution for peace. Whatever they come up with, maybe we can all learn something from it too?
In his article A Nice Cup of Tea, George Orwell gives the reader step-by-step instructions on how to brew a cup of tea that will make the drinker "feel wiser, braver or more optimistic".
However, his guide is quite qualitative, and if you're a quantity-obsessed, analytical person like me who would prefer a recipe that says to add "25g of chopped parsley" rather than "a handful of chopped parsley", then I'm afraid that Mr Orwell's treatise won't be of much use...
But help is at hand! In 1980, sub-committee 8 of technical committee 34 of the International Organization for Standardization (ISO) published a standard method (designated ISO 3103) for the brewing of tea. If you're unsure when to add the milk (i.e. if you're one of those people who put the milk in first), then all you need to do is consult ISO 3103:
Now, it may seem like a bit of a waste of time for a serious scientific committee to embark upon the task of quantifying tea; however, this standard does have some industrial and scientific merit. For taste tests (say, in product control or psychology experiments) it is important for the tea to be brewed to a common standard for meaningful sensory comparisons to be made. ISO 3103 establishes a standard for doing just that, and has a chuckle along the way.
Notice that ISO 3103 makes no mention of sugar. For all the tea-sugarers out there, I'll wrap up with this quote from Orwell:
But still, how can you call yourself a true tealover if you destroy the flavour of your tea by putting sugar in it? It would be equally reasonable to put in pepper or salt. Tea is meant to be bitter, just as beer is meant to be bitter. If you sweeten it, you are no longer tasting the tea, you are merely tasting the sugar; you could make a very similar drink by dissolving sugar in plain hot water. To those misguided people I would say: Try drinking tea without sugar for, say, a fortnight and it is very unlikely that you will ever want to ruin your tea by sweetening it again.
(top photo: macalit)
In a year that has already seen the announcement of plans to mine asteroids, the first privately funded spacecraft docking with the ISS, and engine testing on the Skylon project, plans for an even more ambitious project have emerged.
The Mars One program has announced a thorough road map detailing its plan to establish a human colony on the surface of the Red Planet by 2023.
How will this be paid for? They plan to turn the Mars One project into the largest media event ever by coupling the astronaut selection and training process to a reality TV show!
You can read their plan, but here is a summary:
Such ambitious plans have usually fallen at one of the first two hurdles: funding and technology. However, as we have seen with SpaceX, private firms have a great capacity for raising cash, and with the plan to turn the whole thing into a worldwide media event, Mars One might be well on their way to purchasing everything they need.
Technology-wise, they have been very clever and designed their whole plan around things which already exist and can be bought. In fact, all the landers and launchers are hoped to be supplied by SpaceX. This is great because it means that part of the money Mars One will be spending to explore space will be spent by a company trying to explore space! This is exactly what we need for the space industry to flourish.
I really hope that this project gets going. They face some major hurdles - cosmic radiation, for one, poses a serious problem for interplanetary voyages - but I'll be waiting with bated breath to see how far this thing gets.
Good luck, Mars One!
In this talk, Martin Hanczyc outlines a series of experiments where artificial protocells synthesised from oil and clay display primitive kinds of behaviour associated with life.
He begins by framing his working assumptions - that there exists a continuum between the living and the non-living - and identifies a few key features of a living system: a self-contained body, working metabolism, and inheritable information. The body coupled with metabolism allows an organism to move and interact with its environment, and all three together allow for replication and evolution.
While a cell might contain on the order of 1,000,000 different kinds of molecules, he was able to synthesise "life-like" protocells from just five. Oil separates from water and forms globules. These oil globules make up his protocell bodies, while a type of chemically active clay forms the basis of a metabolic system - extracting energy from the environment in order to "do something". What can his protocells do? He shows us a few neat videos:
These are really cool experiments. Though his humble artificial organisms are by no means Frankenstein's monster, they go a long way to help us understand what questions we should be asking about what makes something living as opposed to non-living. They show that certain fundamental properties of the complex life we see around us can be observed in relatively simple chemical systems.
In the middle of the night on 8 July 1962, the sky above Honolulu lit up bright orange through heavy cloud as the city went black. This was the first of a series of five tests carried out by the US during 1962 as part of Operation Dominic.
Starfish Prime was the detonation of a 1.4 megaton nuclear warhead at an altitude of 400 km - right in the mid-reaches of the ionosphere. The bomb, releasing an energy equivalent to 100 Hiroshimas, generated a gigantic electromagnetic pulse, wiping out much of the blast research instrumentation and extinguishing streetlights all over the Hawaiian capital, 1,400 km away.
A fleet of rockets packed with measuring equipment was launched to try, amongst other things, to gauge the size of the radio blackout zone created by nuclear blasts.
X-rays produced by the detonation ionised a large portion of the surrounding atmosphere, resulting in the release of high-energy β-particles. This residual radiation proved to be a terrible unforeseen after-effect. When charged particles become trapped in the upper atmosphere, they spiral around magnetic field lines and 'bounce' between the poles - an effect known as magnetic mirroring.
The β-particles from Starfish Prime formed a radiation belt around the Earth which reportedly persisted for five years. In that time, it was responsible for disabling around one third of all the low Earth orbit satellites, including the famous Telstar which, by coincidence, was due to launch the day after Starfish Prime.
Light travels through a vacuum at a speed of exactly 299,792,458 metres per second. No matter how fast you or the light source are travelling, go and measure it and you'll find that this is exactly the case.
At this speed, it takes light:
After this it stops making sense to say "a distance x", as the expansion of the universe warps our perception of distance on these immense timescales. Therefore, when you hear radio static, around 1% of it is said to originate not from a place but from a time, roughly 13.8 billion years ago - the cosmic microwave background from the time of recombination at the dawn of the universe.
TL;DR: The universe is big.
(Photo: pretendy)
A couple of years ago I had the enormous privilege of attending a TEDx lecture by the mathematician and cosmologist Roger Penrose. Unfortunately, at the time I was only a beginner in physics, and his talk, Before the Big Bang, largely went over my head. What stuck with me, though, was his incredible hand-drawn slides. Rather than being dry Arial bullet points peppered with the occasional whimsical clip-art, they showed the great deal of effort Penrose had put into conveying what he was trying to say in a visually appealing and stimulating manner.
Here are a couple of slides from another one of his talks, this one on his controversial Orch-OR theory. He and a colleague, Dr Stuart Hameroff, independently came across and later co-developed a theory of brain computation and consciousness. While the neuron has long been assumed to be the fundamental computational unit of the brain - analogous to a transistor in a microchip - Hameroff and Penrose suggest that something much smaller is actually responsible: the microtubule.
Microtubules are self-assembling polymers found in the cytoskeleton of cells and made up of many repeated units of the peanut-shaped protein dimer tubulin. Each can be in one of two internal states denoted by black and white in the picture below. States can propagate along microtubules like cellular automata.
The idea introduced by Penrose is that each dimer is described as being in a quantum superposition of both states (right, below), so that, in the language of quantum computing, individual tubulin proteins represent 'qubits' rather than 'bits'.
Hameroff and Penrose express many concerns with the idea that neurons act as the fundamental unit of computation. One such concern is the apparent wealth of cognitive functions available to micro-organisms that lack a nervous system. For example, single-celled organisms such as the Paramecium can swim, search out food, learn, remember and procreate, all without the help of neuronal computation.
Penrose and Hameroff cite certain experiments which show that different parts of the brain communicate faster than electro-neurochemistry should allow. Furthermore, Penrose has spent two books arguing that certain 'non-computable' (or Gödelian) thought displayed by humans shows that consciousness cannot be explained in terms of the brain being a classical Turing machine.
They claim that quantum computation through microtubule channels neatly alleviates these concerns.
However, for this theory to work, a few alterations must be made to physics... The 'OR' part of Orch-OR stands for 'objective reduction', an even more fundamental theory of Penrose's concerning quantum physics, general relativity and the structure of spacetime itself - in particular, one of the most fundamental aspects of nature: the collapse of the wavefunction.
For Orch-OR and the theory of microtubule computation to hold, a great deal of physics first needs to be revolutionised, so it is no wonder that Hameroff and Penrose have come under heavy criticism for their theory. Many cite the non-predictive and untestable claims made by Orch-OR, as well as fundamental problems with how quantum states could survive and propagate through matter.
Don't get me wrong: it truly is a fantastic and imaginative idea, but that doesn't make it correct. It has a long way to go before it's close to being accepted as a working theory, but what it is successful at is making us question our most fundamental assumptions, which I think can only be a great thing in science.
The 21st century's version of creationism, intelligent design, can be summed up by the Blind Watchmaker argument: that like a watch, whose intricacy implies a watchmaker, life displays such vast complexity that it must be the result of a creator or designer. Intelligent design proponents will say that if you take the parts of a clock, put them in a box and jumble them around randomly, you will never construct a working clock. Similarly, they say, random mutations in DNA cannot be responsible for the evolution of complex life.
However, this is both a straw man and a false analogy. It's a straw man because it grossly misinterprets the theory of evolution. By drawing a false analogy between DNA replication and jumbling up cogs and springs inside a box, it presents evolution as some kind of 'Randomise' button on an Elder Scrolls character creation menu.
This is not the case.
So anyway, this video is pretty cool. It deconstructs the straw man and instead creates a true (or almost true) analogy between evolution and clock-making. It presents a simulation of a large population of clocks, each with its own (initially random) genome containing all the information about which components connect to which. At random, three clocks are selected, pitted against each other and measured for their ability to tell the time. The loser is banished from the population and the winning two mate and create offspring. This process is repeated thousands upon thousands of times, and guess what the result is?
Three- or four-handed clocks which tick and tock with the utmost precision.
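To give a flavour of that selection loop, here is a stripped-down sketch. The genome encoding and fitness function here are made-up placeholders; the real simulation assembles gears and hands and scores actual timekeeping:

```javascript
// Sketch of the tournament-selection loop described above.
// A "clock" here is just an array of numbers standing in for a genome;
// fitness() is a placeholder for "how well does this clock tell time?".
function randomClock(length) {
  return Array.from({ length }, () => Math.random());
}

function fitness(clock) {
  // Placeholder: reward genomes whose values sum close to a target.
  const target = clock.length / 2;
  const sum = clock.reduce((a, b) => a + b, 0);
  return -Math.abs(sum - target);
}

function mate(a, b) {
  // Crossover plus occasional random mutation.
  return a.map((gene, i) => {
    let child = Math.random() < 0.5 ? gene : b[i];
    if (Math.random() < 0.01) child = Math.random();  // mutation
    return child;
  });
}

// Build a population, then repeatedly: pick three at random,
// banish the worst, and replace it with a child of the other two.
let population = Array.from({ length: 100 }, () => randomClock(20));
for (let round = 0; round < 10000; round++) {
  const picks = [0, 0, 0].map(() => Math.floor(Math.random() * population.length));
  picks.sort((i, j) => fitness(population[i]) - fitness(population[j]));
  const [loser, parentA, parentB] = picks;
  population[loser] = mate(population[parentA], population[parentB]);
}
```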
There are a couple of really interesting things I noted about this video:
It's quite difficult to tell whether points 1 and 2 are purely an artefact of the program - which simplifies the full complexity of life down to a few-component system - or whether they are an intrinsic (but more subtle) part of real evolution. I think it would be interesting to see if real populations underwent these kinds of phenotypic phase transitions over short periods of time (hundreds of generations).
Our bodies contain roughly ten times as many bacteria, fungi and other microscopic organisms as human cells! This interactive guide by Scientific American gives you a brief tour.
Soda on a coin.
Process 6 by Casey Reas
Casey is a computational artist and professor at the University of California, Los Angeles. He co-created the Processing programming language, which I've fallen in love with recently. It's a free package which provides a powerful and simple environment for all sorts of graphical simulations and image generation. It's just generally really fun to play with.
View more of Casey's artistic work here.
This is really cool. Toni Westbrook has created an extensive neural network software package that uses virtual DNA to grow artificial brains. He states:
The ultimate goal is a true, functional model of the biological neural network in software grown using virtual DNA.
In this video his SynthNet program interfaces with a Lego robot. He demonstrates its ability not only to interpret sound, but also to use it as a stimulus for associative learning.