Singularity Hub Daily

Singularity Hub

A constant stream of SingularityHub's high-quality articles, read to you via an AI system.

All Episodes

How will the solar system die? It’s a hugely important question that researchers have speculated a lot about, using our knowledge of physics to create complex theoretical models. We know that the sun will eventually become a “white dwarf,” a burnt stellar remnant whose dim light gradually fades into darkness. This transformation will involve a violent process that will destroy an unknown number of its planets. So which planets will survive the death of the sun? One way to seek the answer is to look at the fates of other similar planetary systems. This has proven difficult, however. The feeble radiation from white dwarfs makes it difficult to spot exoplanets (planets around stars other than our sun) which have survived this stellar transformation; they are literally in the dark. In fact, of the over 4,500 exoplanets that are currently known, just a handful have been found around white dwarfs, and the location of these planets suggests they arrived there after the death of the star. This lack of data paints an incomplete picture of our own planetary fate. Fortunately, we are now filling in the gaps. In our new paper, published in Nature, we report the discovery of the first known exoplanet to survive the death of its star without having its orbit altered by other planets, circling at a distance comparable to those between the sun and the solar system planets.

A Jupiter-Like Planet

This new exoplanet, which we discovered with the Keck Observatory in Hawaii, is particularly similar to Jupiter in both mass and orbital separation, and provides us with a crucial snapshot into planetary survivors around dying stars. A star’s transformation into a white dwarf involves a violent phase in which it becomes a bloated “red giant,” also known as a “giant branch” star, hundreds of times bigger than before. We believe that this exoplanet only just survived; if it had initially been closer to its parent star, it would have been engulfed by the star’s expansion.
When the sun eventually becomes a red giant, its radius will actually reach outwards to Earth’s current orbit. That means the sun will (probably) engulf Mercury and Venus, and possibly the Earth, but we are not sure. Jupiter and its moons have long been expected to survive, although we previously didn’t know for sure. But with our discovery of this new exoplanet, we can now be more certain that Jupiter really will make it. Moreover, the margin of error in the position of this exoplanet could mean that it orbits its white dwarf at almost half the distance at which Jupiter currently orbits the sun. If so, that is additional evidence for assuming that Jupiter and Mars will make it. So could any life survive this transformation? A white dwarf could power life on moons or planets that end up being very close to it (about one-tenth the distance between the sun and Mercury) for the first few billion years. After that, there wouldn’t be enough radiation to sustain anything.

Asteroids and White Dwarfs

Although planets orbiting white dwarfs have been difficult to find, what has been much easier to detect are asteroids breaking up close to the white dwarf’s surface. For exoasteroids to get so close to a white dwarf, they need to have enough momentum imparted to them by surviving exoplanets. Hence, exoasteroids have long been assumed to be evidence that exoplanets are there too. Our discovery finally provides confirmation of this. Although current technology does not allow us to see any exoasteroids in the system discussed in the paper, we can at least now piece together different parts of the puzzle of planetary fate by merging the evidence from different white dwarf systems. The link between exoasteroids and exoplanets also applies to our own solar system.
Individual objects in the asteroid main belt and Kuiper belt (a disc in the outer solar system) are likely to survive the sun’s demise, but some will be gravitationally nudged by one of the surviving planets towards the white dwarf’s surface. Future Dis...

Oct 14

6 min 18 sec

Just under a year and a half ago, OpenAI announced completion of GPT-3, its natural language processing algorithm that was, at the time, the largest and most complex model of its type. This week, Microsoft and Nvidia introduced a new model they’re calling “the world’s largest and most powerful generative language model.” The Megatron-Turing Natural Language Generation model (MT-NLG) is more than triple the size of GPT-3 at 530 billion parameters. GPT-3’s 175 billion parameters were already a lot; its predecessor, GPT-2, had a mere 1.5 billion parameters, and Microsoft’s Turing Natural Language Generation model, released in February 2020, had 17 billion. A parameter is an attribute a machine learning model defines based on its training data, and tuning more of them requires upping the amount of data the model is trained on. It’s essentially learning to predict how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on other words in the sentence. As you can imagine, getting to 530 billion parameters required quite a lot of input data and just as much computing power. The algorithm was trained using an Nvidia supercomputer made up of 560 servers, each holding eight 80-gigabyte GPUs. That’s 4,480 GPUs total, and an estimated cost of over $85 million. For training data, Megatron-Turing’s creators used The Pile, a dataset put together by open-source language model research group Eleuther AI. Composed of everything from PubMed to Wikipedia to Github, the dataset totals 825GB, broken down into 22 smaller datasets. Microsoft and Nvidia curated the dataset, selecting subsets they found to be “of the highest relative quality.” They added data from Common Crawl, a non-profit that scans the open web every month and downloads content from billions of HTML pages, then makes it available in a special format for large-scale data mining. GPT-3 was also trained using Common Crawl data.
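The next-word objective described above can be illustrated with a toy bigram model. This is a deliberate oversimplification: MT-NLG learns billions of parameters over much longer contexts, and the tiny corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real models train on hundreds of gigabytes of text.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def p_next(prev, nxt):
    """Estimated probability that `nxt` follows `prev` in this corpus."""
    total = sum(pair_counts[prev].values())
    return pair_counts[prev][nxt] / total if total else 0.0

print(p_next("the", "cat"))  # "cat" follows "the" in 2 of 3 occurrences
```

A large language model replaces these raw counts with learned parameters and conditions on whole passages rather than a single preceding word, but the prediction target is the same.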
Microsoft’s blog post on Megatron-Turing says the algorithm is skilled at tasks like completion prediction, reading comprehension, commonsense reasoning, natural language inferences, and word sense disambiguation. But stay tuned—there will likely be more skills added to that list once the model starts being widely utilized. GPT-3 turned out to have capabilities beyond what its creators anticipated, like writing code, doing math, translating between languages, and autocompleting images (oh, and writing a short film with a twist ending). This led some to speculate that GPT-3 might be the gateway to artificial general intelligence. But the algorithm’s variety of talents, while unexpected, still fell within the language domain (including programming languages), so that’s a bit of a stretch. However, given the tricks GPT-3 had up its sleeve based on its 175 billion parameters, it’s intriguing to wonder what the Megatron-Turing model may surprise us with at 530 billion. The algorithm likely won’t be commercially available for some time, so it’ll be a while before we find out. The new model’s creators, though, are highly optimistic. “We look forward to how MT-NLG will shape tomorrow’s products and motivate the community to push the boundaries of natural language processing even further,” they wrote in the blog post. “The journey is long and far from complete, but we are excited by what is possible and what lies ahead.” Image Credit: Kranich17 from Pixabay

Oct 13

4 min

Sarah hadn’t laughed in five years. At 36 years old, the avid home cook has struggled with depression since early childhood. She tried the whole range of antidepressant medications and therapy for decades. Nothing worked. One night, five years ago, driving home from work, she had one thought in her mind: this is it. I’m done. Luckily she made it home safe. And soon she was offered an intriguing new possibility to tackle her symptoms—a little chip, implanted into her brain, that captures the unique neural signals encoding her depression. Once the implant detects those signals, it zaps them away with a brief electrical jolt, like adding noise to an enemy’s digital transmissions to scramble their original message. When that message triggers depression, hijacking neural communications is exactly what we want to do. Flash forward several years, and Sarah has her depression under control for the first time in her life. Her suicidal thoughts evaporated. After quitting her tech job due to her condition, she’s now back on her feet, enrolled in data analytics classes and taking care of her elderly mother. “For the first time,” she said, “I’m finally laughing.” Sarah’s recovery is just one case. But it signifies a new era for the technology underlying her stunning improvement. It’s one of the first cases in which a personalized “brain pacemaker” can stealthily tap into, decipher, and alter a person’s mood and introspection based on their own unique electrical brain signatures. And while those implants have achieved stunning medical miracles in other areas—such as allowing people with paralysis to walk again—Sarah’s recovery is some of the strongest evidence yet that a computer chip, in a brain, powered by AI, can fundamentally alter our perception of life. It’s the closest to reading and repairing a troubled mind that we’ve ever gotten. “We haven’t been able to do this kind of personalized therapy previously in psychiatry,” said study lead Dr. Katherine Scangos at UCSF. 
“This success in itself is an incredible advancement in our knowledge of the brain function that underlies mental illness.”

Brain Pacemaker

The key to Sarah’s recovery is a brain-machine interface. Roughly the size of a matchbox, the implant sits inside the brain, silently listening to and decoding its electrical signals. Using those signals, it’s possible to control other parts of the brain or body. Brain implants have given people with lower body paralysis the ability to walk again. They’ve allowed amputees to control robotic hands with just a thought. They’ve opened up a world of sensations, integrating feedback from cyborg-like artificial limbs that transmit signals directly into the brain. But Sarah’s implant is different. Sensation and movement are generally controlled by relatively well-defined circuits in the outermost layer of the brain: the cortex. Emotion and mood are also products of our brain’s electrical signals, but they tend to stem from deeper neural networks hidden at the center of the brain. One way to tap into those circuits is called deep brain stimulation (DBS), a method pioneered in the ’80s that’s been used to treat severe Parkinson’s disease and epilepsy, particularly for cases that don’t usually respond to medication. Sarah’s neural implant takes this route: it listens in on the chatter between neurons deep within the brain to decode mood. But where is mood in the brain? One particular problem, the authors explained, is that unlike movement, there is no “depression brain region.” Rather, emotions are regulated by intricate, intertwining networks across multiple brain regions. Adding to that complexity is the fact that we’re all neural snowflakes—each of us has uniquely personalized brain network connections. In other words, zapping my circuit to reduce depression might not work for you. DBS, for example, has previously been studied for treating depression.
But despite decades of research, it’s not federally approved due to inconsistent result...

Oct 12

8 min 14 sec

Computer chips that recreate the brain’s structure in silicon are a promising avenue for powering the smart robots of the future. Now Intel has released an updated version of its Loihi neuromorphic chip, which it hopes will bring that dream closer. Despite frequent comparisons, the neural networks that power today’s leading AI systems operate very differently than the brain. While the “neurons” used in deep learning shuttle numbers back and forth between one another, biological neurons communicate in spikes of electrical activity whose meaning is tied up in their timing. That is a very different language from the one spoken by modern processors, and it’s been hard to efficiently implement these kinds of spiking neurons on conventional chips. To get around this roadblock, so-called “neuromorphic” engineers build chips that mimic the architecture of biological neural networks to make running these spiking networks easier. The field has been around for a while, but in recent years it’s piqued the interest of major technology companies like Intel, IBM, and Samsung. Spiking neural networks (SNNs) are considerably less developed than the deep learning algorithms that dominate modern AI research. But they have the potential to be far faster and more energy-efficient, which makes them promising for running AI on power-constrained edge devices like smartphones or robots. Intel entered the fray in 2017 with its Loihi neuromorphic chip, which could emulate 125,000 spiking neurons. But now the company has released a major update that can implement one million neurons and is ten times faster than its predecessor. “Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency constrained intelligent computing applications,” Mike Davies, director of Intel’s Neuromorphic Computing Lab, said in a statement. 
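The timing-based communication described here can be sketched with a leaky integrate-and-fire neuron, a textbook simplification of a spiking neuron. This is not Loihi's actual neuron model, and the threshold and leak constants below are arbitrary choices for illustration.

```python
def simulate_lif(current, threshold=1.0, leak=0.9):
    """Return the spike times produced by a sequence of input currents."""
    v = 0.0           # membrane potential, starting at rest
    spikes = []
    for t, i_in in enumerate(current):
        v = leak * v + i_in   # decay toward rest, then integrate the input
        if v >= threshold:
            spikes.append(t)  # the spike's *timing* carries the information
            v = 0.0           # reset after firing
    return spikes

# A steady input produces regularly timed spikes; a stronger input makes
# the neuron fire earlier and more often.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
print(simulate_lif([0.6] * 10))  # → [1, 3, 5, 7, 9]
```

Unlike a deep learning "neuron," which outputs a number every step, this unit stays silent most of the time and communicates only through when it spikes, which is why such networks map poorly onto conventional processors and motivate neuromorphic hardware.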
Loihi 2 doesn’t only significantly boost the number of neurons, it greatly expands their functionality. As outlined by IEEE Spectrum, the new chip is much more programmable, allowing it to implement a wide range of SNNs rather than the single type of model the previous chip was capable of. It’s also capable of supporting a wider variety of learning rules that should, among other things, make it more compatible with the kind of backpropagation-based training approaches used in deep learning. Faster circuits also mean the chip can now run at 5,000 times the speed of biological neurons, and improved chip interfaces make it easier to get several of them working in concert. Perhaps the most significant changes, though, are to the neurons themselves. Each neuron can run its own program, making it possible to implement a variety of different kinds of neurons. And the chip’s designers have taken it upon themselves to improve on Mother Nature’s designs by allowing the neurons to communicate using both spike timing and strength. The company doesn’t appear to have any plans to commercialize the chips, though, and for the time being they will only be available over the cloud to members of the Intel Neuromorphic Research Community. But the company does seem intent on building up the neuromorphic ecosystem. Alongside the new chip, it has also released a new open-source software framework called LAVA to help researchers build “neuro-inspired” applications that can run on any kind of neuromorphic hardware or even conventional processors. “LAVA is meant to help get neuromorphic [programming] to spread to the wider computer science community,” Davies told Ars Technica. That will be a crucial step if the company ever wants its neuromorphic chips to be anything more than a novelty for researchers. But given the broad range of applications for the kind of fast, low-power intelligence they could one day provide, it seems like a sound investment. Image Credit: Intel

Oct 11

4 min 4 sec

It’s often said Earth’s resources are finite. This is true enough. But shift your gaze skyward for a moment. Up there, amid the stars, lurks an invisible bonanza of epic proportions. Many of the materials upon which modern civilization is built exist in far greater amounts throughout the rest of the solar system. Earth, after all, was formed from the same cosmic cloud as all the other planets, comets, and asteroids—and it hardly cornered the market when it comes to the valuable materials we use to make smartphone batteries or raise skyscrapers. A recent study puts it in perspective. Lead author Juan Sanchez and a team of scientists analyzed the spectrum of asteroid 1986 DA, a member of a rare class of metal-rich, near-Earth asteroids. They found the surface of this particular space rock to be 85% metallic, likely including iron, nickel, cobalt, copper, gold, and platinum group metals prized for industrial uses, from cars to electronics. With the exception of gold and copper, they estimate the mass of these metals would exceed their global reserves on Earth—in some cases by an order of magnitude (or more). The team also put a dollar figure on the asteroid’s economic value. If mined and marketed over a period of 50 years, 1986 DA’s precious metals would bring in some $233 billion a year for a total haul of $11.65 trillion. (That takes into account the deflationary effect the flood of new supply would have on the market.) It probably wouldn’t make sense to bring home metals like iron, nickel, and cobalt, which are common on Earth, but they could be used to build infrastructure in orbit and on the moon and Mars. In short, mining one nearby asteroid could yield a precious metals jackpot. And there are greater prizes lurking further afield in the asteroid belt. Of course, asteroid mining is hardly a new idea. The challenging (and expensive) parts are traveling to said asteroids, stripping them of their precious ore, and shipping it out. 
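The revenue figures quoted above are easy to sanity-check; the numbers come straight from the reporting, and this is just the arithmetic connecting them.

```python
# $233 billion a year, mined and marketed over 50 years.
annual_revenue = 233e9
years = 50
total = annual_revenue * years
print(f"${total / 1e12:.2f} trillion")  # → $11.65 trillion
```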
But before we even get to the hard parts, we need to prospect the claim. This study, combined with future NASA missions to the asteroid belt, should help bring the true extent of space resources into sharper focus.

The Priceless Cores of Dead Protoplanets

What makes 1986 DA particularly interesting is its proximity to Earth. Most metal-rich asteroids live way out in the asteroid belt, between Mars and Jupiter. Famous among these is 16 Psyche, a hulking, 140-mile-wide asteroid first discovered in 1852. The asteroid belt was once thought to be the remnants of a planet, but its origins are less certain now. Still, scientists speculate Psyche may be the exposed core of a shattered planet-in-the-making. And indeed, smaller metal-rich asteroids may also be the shards of a protoplanetary core. Under this theory, developing planets in the asteroid belt grew large enough to differentiate rocky mantles and metal cores. These later suffered a series of collisions, leaving their shattered rocky remains and broken metal hearts to wander the belt. We may never observe Earth’s core in person, so if the theory is true, Psyche could be our next best alternative. Also, the existence of so much exposed metal in one place is tantalizing for those who would extend humanity’s presence beyond Earth. Either way, we have so far only managed to assemble a basic portrait of Psyche. It’s simply too far away to study in any great detail. Which is where 1986 DA and 2016 ED85 (another asteroid in the study) come in.

Keeping Up With the Joneses

Both 1986 DA and 2016 ED85 are classified as near-Earth asteroids. That is, they live in our neighborhood. At some point in the past, gravitational interactions with Jupiter nudged them out of the asteroid belt and into near-Earth orbits. So, a key motivation of the study was to trace the asteroids’ lineage. Because they’re closer, we can observe them in more detail and infer the characteristics of their distant family members, including Psyche.
According to the study, spectral analysis...

Oct 10

7 min 30 sec

Most animals are limited to either walking, flying, or swimming, with a handful of lucky species whose physiology allows them to cross over. A new robot took inspiration from them, and can fly like a bird just as well as it can walk like a (weirdly awkward, metallic, tiny) person. It also happens to be able to skateboard and slackline, two skills most humans will never pick up. Described in a paper published this week in Science Robotics, the robot is named Leo, which is short for Leonardo, which is short for LEgs ONboARD drOne. The name makes it sound like a drone with legs, but it has a somewhat humanoid shape, with multi-joint legs, propeller thrusters that look like arms, a “body” that contains its motors and electronics, and a dome-shaped protection helmet. Leo was built by a team at Caltech, and they were particularly interested in how the robot would transition between walking and flying. The team notes that they studied the way birds use their legs to generate thrust when they take off, and applied similar principles to the robot. In a video that shows Leo approaching a staircase, taking off, and gliding over the stairs to land near the bottom, the robot’s motions are seamlessly graceful. “There is a similarity between how a human wearing a jet suit controls their legs and feet when landing or taking off and how LEO uses synchronized control of distributed propeller-based thrusters and leg joints,” said Soon-Jo Chung, one of the paper’s authors and a professor at Caltech. “We wanted to study the interface of walking and flying from the dynamics and control standpoint.” Leo walks at a speed of 20 centimeters (7.87 inches) per second, but can move faster by mixing in some flying with the walking. How wide our steps are, where we place our feet, and where our torsos are in relation to our legs all help us balance when we walk. The robot uses its propellers to help it balance, while its leg actuators move it forward.
To teach the robot to slackline—which is much harder than walking on a balance beam—the team overrode its feet contact sensors with a fixed virtual foot contact centered just underneath it, because the sensors weren’t able to detect the line. The propellers played a big part as well, helping keep Leo upright and balanced. For the robot to ride a skateboard, the team broke the process down into two distinct components: controlling the steering angle and controlling the skateboard’s acceleration and deceleration. Placing Leo’s legs in specific spots on the board made it tilt to enable steering, and forward acceleration was achieved by moving the bot’s center of mass backward while pitching the body forward at the same time. So besides being cool (and a little creepy), what’s the goal of developing a robot like Leo? The paper authors see robots like Leo enabling a range of robotic missions that couldn’t be carried out by ground or aerial robots. “Perhaps the most well-suited applications for Leo would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and call for a substitution by robotic workers,” the paper’s authors said. Examples could include high-voltage line inspection, painting tall bridges or other high-up surfaces, inspecting building roofs or oil refinery pipes, or landing sensitive equipment on an extraterrestrial object. Next up for Leo is an upgrade to its performance via a more rigid leg design, which will help support the robot’s weight and increase the thrust force of its propellers. The team also wants to make Leo more autonomous, and plans to add a drone landing control algorithm to its software, ultimately aiming for the robot to be able to decide where and when to walk versus fly. Leo hasn’t quite achieved the wow factor of Boston Dynamics’ dancing robots (or its Atlas that can do parkour), but it’s on its way. 
Image Credit: Caltech Center for Autonomous Systems and Technologies/Science Robotics

Oct 8

4 min 9 sec

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches. Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach. Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven’s vision will come to life. I presided over the artificial intelligence side of the project, leading a group of scientists at the creative AI startup Playform AI that taught a machine both Beethoven’s entire body of work and his creative process. A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany—the culmination of a two-year-plus effort.

Past Attempts Hit a Wall

Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo. Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.” But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down. There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements.
He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven’s vision. Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

Assembling the Team

In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven. The challenge seemed daunting. To pull it off, AI would need to do something it had never done before. But I said I would give it a shot. Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained. The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

The Project Takes Shape

In June 2019, the group gathered for a two-day workshop at Harvard’s music library. In a large room with a piano, a blackboard and a stack of Beethoven’s sketchbooks spanning most of his known works, we talked about how fragments could be turned into a complete piece of music and how AI could help solve this puzzle, while still remaining faithful to Beethoven’s process and vision.
The music experts in the room were eager to learn more about the sort of music AI had created in the past. I told them how AI had successfully generated music in the style of Bach. However, this was only a harm...

Oct 7

10 min 2 sec

In March of this year, a quarter-mile-wide asteroid flew through space at a speed of 77,000 miles per hour. It was five times farther from Earth than the moon, but that’s actually considered pretty close when the context is the whole Milky Way galaxy. There’s not a huge risk of an asteroid hitting Earth anytime in the foreseeable future. But NASA wants to be ready, just in case. In April the space agency led a simulated asteroid impact scenario, testing how well federal agencies, international space agencies, and other decision-makers, scientific institutions, and emergency managers could work together to avert catastrophe. Now another asteroid-deflecting initiative is underway, but this time, it’s getting much more real. There’s still no danger of anything colliding with Earth or threatening human lives. But NASA’s DART mission plans to purposely crash a spacecraft into an asteroid to try to alter its path. DART stands for Double Asteroid Redirection Test, and NASA has just set its launch date for November 23 at 10:20 pm Pacific time. The spacecraft will launch on a SpaceX Falcon 9 rocket from Vandenberg Air Force Base, located near the California coast about 160 miles north of Los Angeles. From there, it will travel to an asteroid called Didymos, taking about a year to arrive (it’s seven million miles away) and using roll-out solar arrays to power its electric propulsion system. Didymos is 2,560 feet wide and completes a rotation every 2.26 hours. It has a secondary body, or moonlet, named Dimorphos that’s 525 feet wide. The two bodies are just over half a mile apart, and the moonlet revolves about the primary once every 11.9 hours. Using an onboard camera and autonomous navigation software, the spacecraft will crash itself into the moonlet at a speed of almost 15,000 miles per hour. 
NASA estimates that the collision will change Dimorphos’ speed in its orbit around Didymos by just a fraction of one percent, but that’s enough to alter Dimorphos’ orbital period by several minutes, enough to be observed and measured from telescopes on Earth. NASA plans to capture the whole thing on video. Ten days before DART’s asteroid impact, the agency will launch a miniaturized satellite, called LICIACube, equipped with two optical cameras. The goal will be for the cubesat to fly past Dimorphos around three minutes after DART hits its moonlet, allowing the cameras to capture images of the impact’s effects. And that’s not all the observation the mission will get. The European Space Agency plans to launch its Hera spacecraft (named for the Greek goddess of marriage!) in 2024 to see the effects of DART up close and in detail. As the agency notes in its description of the Hera mission, “By the time Hera reaches Didymos, in 2026, Dimorphos will have achieved historic significance: the first object in the solar system to have its orbit shifted by human effort in a measurable way.” You can watch NASA’s coverage of the DART launch on NASA TV via the agency’s app and website. Image Credit: NASA
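The claim that a fraction-of-a-percent speed change yields a period shift of several minutes follows from basic orbital mechanics. As a rough sketch, assuming a circular orbit for simplicity (the dv/v values below are illustrative choices of mine, not NASA's figures):

```python
# For a circular orbit, a small tangential velocity change dv shifts the
# orbit size by da/a = 2*dv/v, and since the period T scales as a**1.5,
# the period shifts by roughly dT/T ≈ 3*dv/v.
T_minutes = 11.9 * 60  # Dimorphos' orbital period around Didymos, in minutes

for dv_over_v in (0.001, 0.005):  # "a fraction of one percent"
    dT = 3 * dv_over_v * T_minutes
    print(f"dv/v = {dv_over_v:.1%} -> period change of about {dT:.1f} minutes")
```

Even the smaller value lands in the minutes range, which is why a nudge too small to notice directly still shows up clearly in telescope timing measurements of the moonlet's orbit.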

Oct 6

3 min 26 sec

Understanding cancer is like assembling IKEA furniture. Hear me out. Both start with individual pieces that make up the final product. For a cabinet, it’s a list of labeled precut plywood. For cancer, it’s a ledger of genes that—through the Human Genome Project and subsequent studies—we know are somehow involved in cells mutating, spreading, and eventually killing their host. Yet without instructions, pieces of wood can’t be assembled into a cabinet. And without knowing how cancer-related genes piece together, we can’t decipher how they synergize to create one of our fiercest medical foes. “It’s like we have the first page of an IKEA manual,” said Dr. Trey Ideker at UC San Diego. But “how these genes and gene products, the proteins, are tied together is the rest of the manual—except there’s about a million pages worth of it. You need to understand those pages if you’re really going to understand disease.” Ideker’s comment, made in 2017, was strikingly prescient. The underlying idea is seemingly simple, yet a wild shift from previous attempts at cancer research: rather than individual genes, let’s turn the spotlight on how they fit together into networks to drive cancer. Ideker, together with Dr. Nevan Krogan at UC San Francisco, launched the Cancer Cell Map Initiative (CCMI), a moonshot that peeks into the molecular “phone lines” within cancer cells that guide their growth and spread. Snip them off, the theory goes, and it’s possible to nip tumors in the bud. This week, three studies in Science led by Ideker and Krogan showcased the power of that radical change in perspective. At its heart are protein-protein interactions: that is, how the cell’s molecular “phone lines” rewire and fit together as they turn to the cancerous dark side. One study mapped the landscape of protein networks to see how individual genes and their protein products coalesce to drive breast cancer. Another traced the intricate web of genetic connections that promote head and neck cancer.
Tying everything together, the third study generated an atlas of protein networks involved in various types of cancer. By looking at connections, the map revealed new mutations that likely give cancer a boost, while also pointing out potential weaknesses ripe for target-and-destroy. For now, the studies aren’t yet a comprehensive IKEA-like manual of how cancer components fit together. But they’re the first victories in a sweeping framework for rethinking cancer. “For many cancers, there is an extensive catalog of genetic mutations, but a consolidated map that organizes these mutations into pathways that drive tumor growth is missing,” said Drs. Ran Cheng and Peter Jackson at Stanford University, who weren’t involved in the studies. Knowing how those work “will simplify our search for effective cancer therapies.” Cellular Chatterbox Every cell is an intricate city, with energy, communications systems, and waste disposal needs. Its secret sauce for keeping everything humming along nicely? Proteins. Proteins are indispensable workhorses with many tasks and even more identities. Some are builders, tirelessly laying down “railway” tracks to connect different parts of a cell; others are carriers, hauling cargo down those protein rails. Enzymes allow cells to generate energy and perform hundreds of other life-sustaining biochemical reactions. But perhaps the most enigmatic proteins are the messengers. These are often small in size, allowing them to zip around the cell and between different compartments. If a cell is a neighborhood, these proteins are mailmen, shuttling messages back and forth. Rather than dropping off mail, however, they deliver messages by physically tagging onto other proteins. These “handshakes” are dubbed protein-protein interactions (PPIs), and are critical to a cell’s function. PPIs are basically the cell’s supply chain, communications cable, and energy economy rolled into one massive infrastructure. Destroying just one PPI can lead a thriving cell to die. 
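To make the network framing concrete, here is a minimal sketch of a PPI graph in plain Python. The interaction list is illustrative (a few well-known cancer-related proteins), not data from the studies above; the point is only that "hub" proteins with many connections are where snipping a phone line matters most.

```python
from collections import defaultdict

# Illustrative interaction list; not data from the CCMI studies.
interactions = [
    ("TP53", "MDM2"), ("TP53", "BRCA1"), ("TP53", "EP300"),
    ("BRCA1", "BARD1"), ("BRCA1", "RAD51"), ("MDM2", "MDM4"),
]

# Build an undirected PPI graph as an adjacency map.
graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

# Hub proteins have the most "phone lines"; disrupting one hub
# severs the largest number of connections at once.
hubs = sorted(graph, key=lambda p: len(graph[p]), reverse=True)
print(hubs[0], len(graph[hubs[0]]))  # TP53 3
```

Real cancer cell maps contain thousands of nodes and edges, but the same degree-counting logic is one of the simplest ways to spot candidate vulnerabilities in such a network.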
PPIs ar...

Oct 5

8 min 55 sec

Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world. Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers. More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI. Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design. “Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.” At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems. This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer. 
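The exponential blow-up behind that claim is easy to quantify. A back-of-envelope sketch (assuming the usual 16-byte complex amplitudes of a double-precision state-vector simulation):

```python
# A full state-vector simulation of n qubits stores 2**n complex
# amplitudes. At 16 bytes each (complex128), 53 qubits -- the size of
# Google's 2019 processor -- already exceeds any machine's memory.
n_qubits = 53
amplitudes = 2 ** n_qubits
bytes_needed = amplitudes * 16
print(bytes_needed / 1e15)  # roughly 144 petabytes
```

Each extra qubit doubles the requirement, which is why classical simulation of even modest quantum processors becomes intractable so quickly.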
This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place. This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered. Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using. Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium. It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies. The researchers point out that this is only a first prototype, so with further optimization and the integration of recent progress in new superconducting materials and surface treatment methods they expect performance to increase even more. But the new qubit the researchers have designed is probably not their most significant contribution. 
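As a rough illustration of the variational idea, here is a toy "eigensolver" simulated classically with NumPy: a parametrized trial state is swept over its parameter and the lowest energy expectation value is kept. The Hamiltonian is a made-up one-qubit example, not the circuit model from the paper; on real hardware, the quantum processor itself would estimate each expectation value rather than a classical matrix product.

```python
import numpy as np

# Made-up one-qubit Hamiltonian standing in for the superconducting
# circuit models the researchers explored.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    # Parametrized trial state |psi> = [cos(theta), sin(theta)].
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi  # expectation value <psi|H|psi>

# Variational loop: sweep the parameter, keep the lowest energy found.
best = min(energy(t) for t in np.linspace(0, np.pi, 1000))

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy
print(abs(best - exact) < 1e-3)  # True: the sweep finds the ground state
```

A real variational quantum eigensolver uses many parameters and a proper optimizer, but the structure is the same: propose a state, measure its energy, and adjust until the energy stops falling.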
By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field. Image Credit: Pete Linfor...

Oct 4

4 min 5 sec

With the right computer program, proteins become pleasant music. There are many surprising analogies between proteins, the basic building blocks of life, and musical notation. These analogies can be used not only to help advance research, but also to make the complexity of proteins accessible to the public. We’re computational biologists who believe that hearing the sound of life at the molecular level could help inspire people to learn more about biology and the computational sciences. While creating music based on proteins isn’t new, different musical styles and composition algorithms had yet to be explored. So we led a team of high school students and other scholars to figure out how to create classical music from proteins. The Musical Analogies of Proteins Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet. A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation. Protein chains can also fold into wavy and curved patterns with ups, downs, turns, and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs. Protein-to-music algorithms can thus map the structural and physiochemical features of a string of amino acids onto the musical features of a string of notes. Enhancing the Musicality of Protein Mapping Protein-to-music mapping can be fine-tuned by basing it on the features of a specific music style. This enhances musicality, or the melodiousness of the song, when converting amino acid properties, such as sequence patterns and variations, into analogous musical properties, like pitch, note lengths, and chords. 
For our study, we specifically selected 19th-century Romantic period classical piano music, which includes composers like Chopin and Schubert, as a guide because it typically spans a wide range of notes with more complex features such as chromaticism, like playing both white and black keys on a piano in order of pitch, and chords. Music from this period also tends to have lighter and more graceful and emotive melodies. Songs are usually homophonic, meaning they follow a central melody with accompaniment. These features allowed us to test out a greater range of notes in our protein-to-music mapping algorithm. In this case, we chose to analyze features of Chopin’s Fantaisie-Impromptu to guide our development of the program. To test the algorithm, we applied it to 18 proteins that play a key role in various biological functions. Each amino acid in the protein is mapped to a particular note based on how frequently it appears in the protein, and other aspects of its biochemistry correspond to other aspects of the music. A larger-sized amino acid, for instance, would have a shorter note length, and vice versa. The resulting music is complex, with notable variations in pitch, loudness, and rhythm. Because the algorithm was based entirely on the amino acid sequence, and no two proteins share the same amino acid sequence, each protein produces a distinct song. This also means that there are variations in musicality across the different pieces, and interesting patterns can emerge. For example, music generated from the receptor protein that binds to the hormone and neurotransmitter oxytocin has some recurring motifs due to the repetition of certain small sequences of amino acids. On the other hand, music generated from tumor antigen p53, a protein that prevents cancer formation, is highly chromatic, producing particularly fascinating phrases where the music sounds almost toccata-like, a style that often features fast and virtuoso technique. 
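A minimal sketch of this kind of mapping can be written in a few lines. The scale, the molecular-weight cutoff, and the toy sequence below are all illustrative assumptions, not the study's actual rules; it only mirrors the two ideas just described (frequency chooses the pitch, residue size sets the note length).

```python
from collections import Counter

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

# Approximate residue masses in daltons for a few amino acids.
WEIGHTS = {"G": 57, "A": 71, "S": 87, "L": 113, "F": 147, "W": 186}

def protein_to_notes(sequence):
    # Frequency rank chooses the scale degree for each amino acid.
    ranked = [aa for aa, _ in Counter(sequence).most_common()]
    pitch = {aa: SCALE[i % len(SCALE)] for i, aa in enumerate(ranked)}
    # Heavier residues get shorter notes, mirroring the text above.
    return [(pitch[aa], 0.25 if WEIGHTS[aa] > 100 else 0.5)
            for aa in sequence]

song = protein_to_notes("GALWAGS")  # hypothetical toy sequence
print(song[:2])  # [('C4', 0.5), ('D4', 0.5)]
```

Because the output depends only on the sequence, any two different proteins fed through such a mapping yield different songs, which is the property the authors rely on.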
By guiding analysis of amino acid properties through specific music styles, protein music can sound much more pleasant to the ear. This can be further developed and applied to a wider variety of music styles, including pop and jazz. P...

Oct 3

4 min 56 sec

The TV show Star Trek: The Next Generation introduced millions of people to the idea of a holodeck: an immersive, realistic 3D holographic projection of a complete environment that you could interact with and even touch. In the 21st century, holograms are already being used in a variety of ways, such as medical systems, education, art, security, and defense. Scientists are still developing ways to use lasers, modern digital processors, and motion-sensing technologies to create several different types of holograms that could change the way we interact. My colleagues and I in the University of Glasgow’s bendable electronics and sensing technologies research group have now developed a system of holograms that uses “aerohaptics” to create feelings of touch with jets of air. Those jets of air deliver a sensation of touch on people’s fingers, hands, and wrists. In time, this could be developed to allow you to meet a virtual avatar of a colleague on the other side of the world and really feel their handshake. It could even be the first step towards building something like a holodeck. To create this feeling of touch we use affordable, commercially available parts to pair computer-generated graphics with carefully directed and controlled jets of air. In some ways, it’s a step beyond the current generation of virtual reality, which usually requires a headset to deliver 3D graphics and smart gloves or handheld controllers to provide haptic feedback, a stimulation that feels like touch. Most wearable-gadget approaches are limited to controlling the virtual object being displayed, and controlling a virtual object doesn’t give the feeling you would experience when two people touch. The addition of an artificial touch sensation can deliver that extra dimension without having to wear gloves to feel objects, and so feels much more natural. Using Glass and Mirrors Our research uses graphics that provide the illusion of a 3D virtual image. 
It’s a modern variation on a 19th-century illusion technique known as Pepper’s Ghost, which thrilled Victorian theatergoers with visions of the supernatural onstage. The system uses glass and mirrors to make a two-dimensional image appear to hover in space without the need for any additional equipment. And our haptic feedback is created with nothing but air. The mirrors making up our system are arranged in a pyramid shape with one open side. Users put their hands through the open side and interact with computer-generated objects which appear to be floating in free space inside the pyramid. The objects are graphics created and controlled by the Unity game engine, which is often used to create 3D objects and worlds in videogames. Located just below the pyramid is a sensor that tracks the movements of users’ hands and fingers, and a single air nozzle, which directs jets of air towards them to create complex sensations of touch. The overall system is directed by electronic hardware programmed to control nozzle movements. We developed an algorithm which allowed the air nozzle to respond to the movements of users’ hands with appropriate combinations of direction and force. One of the ways we’ve demonstrated the capabilities of the “aerohaptic” system is with an interactive projection of a basketball, which can be convincingly touched, rolled, and bounced. The touch feedback from the system’s air jets is also modulated based on the virtual surface of the basketball, allowing users to feel the rounded shape of the ball as it rolls from their fingertips when they bounce it and the slap in their palm when it returns. Users can even push the virtual ball with varying force and sense the resulting difference in how a hard bounce or a soft bounce feels in their palm. 
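The nozzle-control algorithm isn't published in detail, but the core mapping it must perform can be sketched. Everything here (the positions, the 1 cm saturation depth, the linear force model, the function name) is a hypothetical illustration, not the group's actual code:

```python
import math

BALL_CENTER = (0.0, 0.0, 0.12)  # virtual ball inside the pyramid (m)
BALL_RADIUS = 0.05

def jet_command(finger):
    """Aim the air jet at the fingertip, scaling strength by how far
    the finger has pressed into the virtual surface."""
    dist = math.dist(finger, BALL_CENTER)
    penetration = BALL_RADIUS - dist
    if penetration <= 0:
        return None  # no contact, so no air jet
    strength = min(1.0, penetration / 0.01)  # saturate at 1 cm depth
    return {"target": finger, "strength": strength}

print(jet_command((0.0, 0.0, 0.3)))    # None: finger outside the ball
print(jet_command((0.0, 0.0, 0.125)))  # deep contact: strength 1.0
```

Run at the hand tracker's frame rate, a mapping like this is what lets the force of the air follow the curvature of a virtual surface.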
Even something as apparently simple as bouncing a basketball required us to work hard to model the physics of the action and how we could replicate that familiar sensation with jets of air. S...

Oct 1

5 min 28 sec

Climate change is wreaking havoc on land via extreme weather events like wildfires, hurricanes, floods, and record-high temperatures. Glaciers are melting and sea levels are rising. And of course, the ocean isn’t immune to all this upheaval; our seas are suffering rising water temperatures, pollution from plastics and chemicals, overfishing, and more. A British startup is tackling one vitally important component of ocean damage: restoring coral reefs, and in the process, protecting the coastlines they sit on and fostering marine ecosystems within and around them. Ccell was founded in 2015 by Will Bateman, a civil and environmental engineer whose doctorate at Imperial College London involved studying the directional effects of extreme ocean waves. Bateman applied that research in founding the company, which uses an “ultra-light curved paddle” to harness energy from waves, combining this energy with an electrolytic technique to grow artificial reefs. Here’s how it works. A structure made of steel is immersed in the sea—a modular design with units 2.5 meters (8.2 feet) long and up to 2 meters (6.5 feet) high means the reefs can be customized for different areas—then low-voltage electrical currents produced by wave energy pass between the steel and a metal anode. This produces oxygen at the anode and raises the pH at the cathode (the steel), causing the dissolved salts that naturally exist in seawater to calcify onto the steel and turn to rock. It’s a slow process—the rock grows at a rate of about 2.5 centimeters (1 inch) per year—but Ccell claims the method accelerates coral growth, enabling fragments of broken or farmed corals to grow faster than they would on natural reefs. The reefs are considered “hybrid” because they’re not fully natural, but once they’ve been in the water for a while, they essentially act as a substrate on which many components of a natural reef can thrive. 
Besides housing thriving ecosystems of marine life that include everything from coral to fish, lobsters, clams, and sea turtles, reefs also help protect the shorelines they’re near by breaking down waves. While large waves tend to be destructive, small waves can actually re-deposit sand on the beach and help preserve it. Because Ccell’s reefs are porous, they induce turbulence in waves and further reduce their force before they reach the shore. Areas whose beaches draw tourists are particularly interested in keeping their sand. Ccell installed its first reef substrate over the summer at Telchac Puerto, a resort near the city of Mérida on Mexico’s Yucatan peninsula. If the reef succeeds at protecting the shoreline and fostering a healthy marine ecosystem, Ccell will likely be installing many more like it in the near future. Artificial reefs aren’t a new idea. One similar to Ccell’s was installed in Sydney Harbor in 2019, and reefs made from decommissioned oil rigs, aircraft carriers, and ships can be found around the world. What sets Ccell apart is the electrolysis that helps rock form (which is based on a technology called Biorock that the Global Coral Reef Alliance has been using since 1996), and the fact that it’s now going commercial. It’s not just beach resorts that are taking note of hybrid reef technology. DARPA’s Reefense project is looking to hybrid reefs to “mitigate the coastal flooding, erosion, and storm damage that increasingly threaten civilian and Department of Defense infrastructure and personnel.” Crowdcube, a British investment crowdfunding platform that led Ccell’s seed funding, estimated a global market of £50 billion ($67 billion) for hybrid reefs, noting that Quintana Roo—the Mexican state adjacent to Yucatan, where Cancun and other popular resorts are located—spent around £7.7 million ($10.3 million) per mile to add sand to its beaches, and 6 to 8 percent of that washed away within a year. A more cost-effective, long-term solution is in order. 
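Those Crowdcube figures imply a recurring cost that a one-off reef installation could avoid. A back-of-envelope check, using only the numbers quoted above:

```python
# Sand replacement at about GBP 7.7M per mile, with 6-8 percent
# washing away within a year, implies this much lost per mile annually.
cost_per_mile = 7_700_000  # GBP, from the Quintana Roo figure above
low, high = 0.06 * cost_per_mile, 0.08 * cost_per_mile
print(round(low), round(high))  # 462000 616000
```

Roughly half a million pounds of sand per mile washing away every year is the recurring bill a durable hybrid reef would be competing against.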
Ccell appears to be on the right track, but at a rate of one inch of rock growth per y...

Sep 30

4 min 27 sec

Screens are taking over our lives. According to market research firm eMarketer, in 2020 adults in the US spent an average of 7 hours and 50 minutes per day looking at screens. That total is likely much higher for desk workers, who look at their computers during the work day then look at their phones or TVs in the evening. Screen time is bad enough for adults, but what about kids? Video games, social media, show streaming, and messaging have all become common activities not just for teens, but for children too, and the impacts often aren’t positive. Two weeks ago, for example, the Wall Street Journal broke the story that Facebook has downplayed findings from its own research on the ill effects of its platforms (namely Instagram) on teenage girls. Rates of depression, anxiety, and eating disorders among adolescents and adults are on the rise. In the US, it’s mostly up to parents to restrict or control their kids’ screen time and social media usage. But the degree to which parents try to limit these activities (and succeed at doing so) varies widely. In China, it’s a different story. Forget parents—the government has taken matters into its own hands and is seeing to it that kids don’t while away their time (and their young developing brains) on worthless screen-centered activities. Out of Time At the end of August, China’s National Press and Publication Administration implemented new rules restricting the amount of time that minors (defined here as under age 18) can spend playing video games, slashing the limit to one hour per day on weekends and holidays. The previous limit, set in 2019, was 3 hours on holidays and 1.5 hours on other days. Two weeks ago, ByteDance Ltd., which owns TikTok and its Chinese version Douyin, followed suit, implementing restrictions for users under 14. The app’s new “youth mode” allows kids and teens to be on the platform for up to 40 minutes a day total, and only between the hours of 6am and 10pm. 
“Adolescents are the future of the motherland, and protecting the physical and mental health of minors is related to the vital interests of masses, and in cultivating newcomers in the era of national rejuvenation,” the Press and Publications Administration said in a statement. In other words, the youth are the future, and if we let screens and social media turn their brains to mush while they’re young, the future’s not going to be very bright. Screen Time and Geopolitics? The restrictions come amid growing geopolitical tensions between China and the US, and crackdowns by the Chinese government over various sectors of the economy, from big tech to education to ride-hailing and real estate. Limiting kids’ screen time may not appear to be connected to China’s geopolitical ambitions, but considering the longer-term implications of these policies says otherwise. All else being equal, which country is more likely to produce a generation of great leaders, innovators, scientists, businesspeople, creatives, and the like: one where clear rules (and a cultural stigma) around screens force kids to spend time on more productive activities and curb the negative effects of screens on their mental health—or one where kids spend hours each day immersed in virtual worlds, distracting them from real-life activities and wearing down their self-esteem, focus, and social skills in the process? Of course, not all else is equal between China and the US. Though both nations are global powerhouses, they’re worlds apart in terms of culture, government, education systems, and social norms, to name just a few. It’s not unreasonable to think that government restrictions on kids’ screen time could help make China’s next generation more capable than America’s. But for one, it’s uncertain how strictly the time limits will be enforced. Douyin and gaming platforms will require name and age verification, and some gaming platforms will do periodic facial recognition checks on players. 
Tao Ran, who directs Beijing’s Adolescent Psychological D...

Sep 29

6 min 16 sec

If the Human Genome Project (HGP) was an actual human, he or she would be a revolutionary whiz kid. A prodigy in the vein of Mozart. One who changed the biomedical universe forever as a teenager, but ultimately has much more to offer in the way of transforming mankind. It’s been 20 years since scientists published the first draft of the human genome. Since its launch in the 90s, the HGP fundamentally altered how we understand our genetic blueprint, our evolution, and the diagnosis and treatment of diseases. It spawned famous offspring, including gene therapy, mRNA vaccines, and CRISPR. It’s the parent to HGP-Write, a global consortium that seeks to rewrite life. Yet as genome sequencing costs and time continue to dive, the question remains: what have we actually learned from the HGP? After two decades, is it becoming obsolete, with a new generation of genomic data in the making? And with controversial uses such as designer babies, human-animal chimeras, organs-in-a-tube, and shaky genetic privacy, how is the legacy of the HGP guiding the future of humanity? In a special issue of Science, scientists across the globe took a deep dive into the lessons learned from the world’s first biomedical moonshot. “Although some hoped having the human genome in hand would let us sprint to medical miracles, the field is more an ongoing relay race of contributions from genomic studies,” wrote Science senior editor Laura Zahn. Decoding, reworking, and potentially one day augmenting the human genome is an ultramarathon, buoyed by potential medical miracles and fraught with possible abuses. “As genomic data and its uses continue to balloon, it will be critical to curb potential abuse and ensure that the legacy of the HGP contributes to the betterment of all human lives,” wrote Drs. Jennifer Rood and Aviv Regev at Genentech in a perspectives article for the issue. An Apollo Program to Decode Life Big data projects are a dime a dozen these days. A global effort to solve the brain? Yup. 
Scouring centenarians’ genes to find those that lead to longevity? Sure! Spitting in a tube to find out your ancestry and potential disease risks—the kits are on sale for the holidays! Genetically engineering anything—from yeast that brew insulin to an organism entirely new to Earth—been there, done that! These massive international collaborations and sci-fi stretch goals that we now take for granted owe their success to the HGP. It’s had a “profound effect on biomedical research,” said Rood and Regev. Flashback to the 1990s. Pulp Fiction played in theaters, Michael Jordan owned the NBA, and an international team decided to crack the base code of human life. The project arose from years of frustration with the limited resolution of genetic mapping tools. Scientists could roughly track down a gene related to certain types of genetic disorders, like Huntington’s disease, which is due to a single gene mutation. But it soon became clear that most of our toughest medical foes, such as cancer, often have multiple genetic hiccups. With the tools that were available at the time, solving these disorders was similar to debugging thousands of lines of code through a fogged-up lens. Ultimately, the pioneers realized we needed an “infinitely dense” map of the genome to really begin decoding, said the authors. Meaning, we needed a whole picture of the human genome, at high resolution, and the tools to get it. Before the HGP, we were peeking at our genome through consumer binoculars. After it, we got the James Webb space telescope to look into our inner genetic universe. The result was a human “reference genome,” a mold that nearly all biomedical studies map onto, from synthetic biology to chasing disease-causing mutants to the creation of CRISPR. Massive global consortiums, including the 1000 Genomes Project, the Cancer Genome Atlas, the BRAIN Initiative, and the Human Cell Atlas have all followed in HGP’s steps. 
As a first big data approach to medicine, before the internet was ub...

Sep 28

8 min 17 sec

The brain is the center of every human being’s world, but many of its inner workings remain mysterious. Slowly, scientists are pulling back the veil. Recently, for example, researchers have created increasingly intricate maps of the brain’s connections. These maps, called connectomes, detail every cell and synapse in small areas of the brain—but the maps are static. That is, we can’t watch the cellular circuits they trace in action as an animal encounters the world and information courses through its neural connections. Most of the methods scientists use to watch the brain in action offer either low resolution and wide coverage or high resolution and narrow coverage. A new technique, developed by researchers at The Rockefeller University and recently published in the journal Nature Methods, is the best of both worlds. Called light beads microscopy, the technique let the team record hundreds of thousands of neurons in 3D volumes through time. In a striking example, they released a movie of a million neurons firing in a mouse brain as it went about its day. Typically, neuroscientists use a technique called two-photon microscopy to record neurons as they fire. Laser pulses are sent into the brain, where they interact with fluorescent tags and cause them to light up. Scientists then interpret the light to infer activity. Two-photon microscopy can record small bands of neurons in action, but struggles with bigger groups. The light beads technique builds on two-photon microscopy, with a clever tweak. Instead of relying on single pulses too slow to record broad populations of neurons firing, it divides each pulse into 30 sub-pulses of varying strengths. A series of mirrors sends these sub-pulses into the brain at 30 different depths, recording the behavior of neurons at each depth almost simultaneously. The technique is so speedy that its only limitation is how quickly the fluorescent tags respond to the pulses of light. 
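The pulse-splitting trick above can be sketched schematically. The exponential power weighting (stronger sub-pulses for deeper planes, to offset scattering) and the gain value are assumed design choices for illustration; only the 30-way split comes from the article:

```python
# Sketch of the light-beads idea: split each laser pulse into 30
# sub-pulses aimed at 30 depths, so one pulse period samples a whole
# column of tissue instead of a single plane.
N_DEPTHS = 30

def sub_pulses(pulse_energy, gain=1.1):
    # Weight sub-pulses so deeper planes receive more energy
    # (an assumed compensation for scattering, not a published figure).
    weights = [gain ** d for d in range(N_DEPTHS)]
    total = sum(weights)
    return [(d, pulse_energy * w / total)
            for d, w in enumerate(weights)]

pulses = sub_pulses(1.0)
assert len(pulses) == N_DEPTHS                       # one per depth
assert abs(sum(e for _, e in pulses) - 1.0) < 1e-9   # energy conserved
assert max(pulses, key=lambda p: p[1])[0] == N_DEPTHS - 1  # deepest strongest
```

The payoff is that the per-plane sampling rate equals the laser's pulse rate itself, rather than the pulse rate divided by the number of planes.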
To test it, the team outfitted a microscopy platform—essentially a lightweight microscope that can be attached to a mouse’s head to record brain activity as it moves about—with the new light beads functionality and put it to work. They were able to capture hundreds of thousands of neurons signaling to each other from across the cortex. Even better? Because light beads builds on already-widely-used two-photon microscopy, labs should already have or be able to readily procure the needed equipment. “Understanding the nature of the brain’s densely interconnected network requires developing novel imaging techniques that can capture the activity of neurons across vastly separated brain regions at high speed and single-cell resolution,” Rockefeller’s Alipasha Vaziri said in a statement. “Light beads microscopy will allow us to investigate biological questions in a way that had not been possible before.” But the technique won’t replace standard two-photon microscopy, Vaziri says. Rather, he sees it as a complementary approach. Indeed, the growing quiver of imaging technologies, from those yielding static wiring diagrams to those recording function in vivo, will likely combine, quilt-like, to provide a far richer picture of how our brains do what they do. Researchers hope this kind of work can shed light on how the brain’s complex networks of neurons produce sensations, thoughts, and movement, and on what causes them to malfunction, and perhaps even help us engineer our own intelligent systems in silicon. Image Credit: Alipasha Vaziri / The Rockefeller University

Sep 27

3 min 44 sec

It’s crunch time on climate change. The IPCC’s latest report told the world just how bad it is, and it’s bad. Companies, NGOs, and governments are scrambling for fixes, both short-term and long-term, from banning the sale of combustion-engine vehicles to pouring money into hydrogen to building direct air capture plants. And one initiative, launched last week, is taking an “if you can name it, you can tame it” approach by creating an independent database that measures and tracks emissions all over the world. Climate TRACE, which stands for tracking real-time atmospheric carbon emissions, is a collaboration between nonprofits, tech companies, and universities, including CarbonPlan, Earthrise Alliance, Johns Hopkins Applied Physics Laboratory, former US Vice President Al Gore, and others. The organization started thanks to a grant from Google, which funded an effort to measure power plant emissions using satellites. A team of fellows from Google helped build algorithms to monitor the power plants (the Fellowship was created in 2019 to let Google employees do pro bono technical work for grant recipients). Climate TRACE uses data from satellites and other remote sensing technologies to “see” emissions. Artificial intelligence algorithms combine this data with verifiable emissions measurements to produce estimates of the total emissions coming from various sources. These sources are divided into ten sectors—like power, manufacturing, transportation, and agriculture—each with multiple subsectors (e.g., two subsectors of agriculture are rice cultivation and manure management). The total carbon emitted from January 2015 to December 2020, by the project’s estimation, was 303.96 billion tons. The biggest offender? Electricity generation. It’s no wonder, then, that states, companies, and countries are rushing to make (occasionally unrealistic) carbon-neutral pledges, and that the renewable energy industry is booming. 
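The cumulative figure above works out to a steady annual rate, a quick sanity check worth doing:

```python
# Average annual emissions implied by Climate TRACE's cumulative
# estimate of 303.96 billion tons over the six years 2015-2020.
total_billion_tons = 303.96
years = 6
avg = total_billion_tons / years
print(round(avg, 2))  # 50.66 billion tons per year
```

That per-year figure is the baseline any net-zero pledge ultimately has to bend toward zero.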
The founders of the initiative hope that, by increasing transparency, the database will increase accountability, thereby spurring action. Younger consumers care about climate change, and are likely to push companies and brands to do something about it. The BBC reported that in a recent survey led by the UK’s Bath University, almost 60 percent of respondents said they were “very worried” or “extremely worried” about climate change, while more than 45 percent said feelings about the climate affected their daily lives. The survey received responses from 10,000 people aged 16 to 25, finding that young people in the global south are the most concerned about climate change, while in the northern hemisphere those most worried are in Portugal, which has grappled with severe wildfires. Many of the survey respondents, independent of location, reportedly feel that “humanity is doomed.” Once this demographic reaches working age, they’ll be able to throw their weight around, and it seems likely they’ll do so in a way that puts the planet and its future at center stage. For all its sanctimoniousness, “naming and shaming” of emitters not doing their part may end up being both necessary and helpful. Until now, Climate TRACE’s website points out, emissions inventories have been largely self-reported (I mean, what’s even the point?), and they’ve used outdated information and opaque measurement methods. Besides being independent, which is huge in itself, TRACE is using 59 trillion bytes of data from more than 300 satellites, more than 11,100 sensors, and other sources of emissions information. “We’ve established a shared, open monitoring system capable of detecting essentially all forms of humanity’s greenhouse gas emissions,” said Gavin McCormick, executive director of coalition convening member WattTime. 
“This is a transformative step forward that puts timely information at the fingertips of all those who seek to drive significant emissions reductions on our path to net zero.” Given the scale of the project, the parties involved, and how ...

Sep 24

4 min 31 sec

As the inhabitants of an ancient Middle Eastern city now called Tall el-Hammam went about their daily business one day about 3,600 years ago, they had no idea an unseen icy space rock was speeding toward them at about 38,000 mph (61,000 kph). Flashing through the atmosphere, the rock exploded in a massive fireball about 2.5 miles (4 kilometers) above the ground. The blast was around 1,000 times more powerful than the Hiroshima atomic bomb. The shocked city dwellers who stared at it were blinded instantly. Air temperatures rapidly rose above 3,600 degrees Fahrenheit (2,000 degrees Celsius). Clothing and wood immediately burst into flames. Swords, spears, mudbricks, and pottery began to melt. Almost immediately, the entire city was on fire. Some seconds later, a massive shockwave smashed into the city. Moving at about 740 mph (1,200 kph), it was more powerful than the worst tornado ever recorded. The deadly winds ripped through the city, demolishing every building. They sheared off the top 40 feet (12 m) of the 4-story palace and blew the jumbled debris into the next valley. None of the 8,000 people or any animals within the city survived; their bodies were torn apart and their bones blasted into small fragments. About a minute later, 14 miles (22 km) to the west of Tall el-Hammam, winds from the blast hit the biblical city of Jericho. Jericho’s walls came tumbling down and the city burned to the ground. It all sounds like the climax of an edge-of-your-seat Hollywood disaster movie. How do we know that all of this actually happened near the Dead Sea in Jordan millennia ago? Getting answers required nearly 15 years of painstaking excavations by hundreds of people. It also involved detailed analyses of excavated material by more than two dozen scientists in 10 states in the US, as well as Canada and the Czech Republic. 
When our group finally published the evidence recently in the journal Scientific Reports, the 21 co-authors included archaeologists, geologists, geochemists, geomorphologists, mineralogists, paleobotanists, sedimentologists, cosmic-impact experts, and medical doctors. Here’s how we built up this picture of devastation in the past. Firestorm Throughout the City Years ago, when archaeologists looked out over excavations of the ruined city, they could see a dark, roughly 5-foot-thick (1.5 meter) jumbled layer of charcoal, ash, melted mudbricks, and melted pottery. It was obvious that an intense firestorm had destroyed this city long ago. This dark band came to be called the destruction layer. No one was exactly sure what had happened, but that layer wasn’t caused by a volcano, earthquake, or warfare. None of them are capable of melting metal, mudbricks, and pottery. To figure out what could, our group used the Online Impact Calculator to model scenarios that fit the evidence. Built by impact experts, this calculator allows researchers to estimate the many details of a cosmic impact event, based on known impact events and nuclear detonations. It appears that the culprit at Tall el-Hammam was a small asteroid similar to the one that knocked down 80 million trees in Tunguska, Russia in 1908. It would have been a much smaller version of the giant miles-wide rock that pushed the dinosaurs into extinction 65 million years ago. We had a likely culprit. Now we needed proof of what happened that day at Tall el-Hammam. Finding ‘Diamonds’ in the Dirt Our research revealed a remarkably broad array of evidence. The destruction layer also contains tiny diamonoids that, as the name indicates, are as hard as diamonds. Each one is smaller than a flu virus. It appears that wood and plants in the area were instantly turned into this diamond-like material by the fireball’s high pressures and temperatures. 
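The Online Impact Calculator’s starting point is basic physics: an impactor’s energy is its kinetic energy, E = ½mv². The sketch below shows that arithmetic using the article’s ~61,000 kph speed; the 50-meter diameter and icy 1,000 kg/m³ density are our illustrative assumptions, not the study’s fitted values, and the real calculator also models atmospheric entry, fragmentation, and burst altitude.

```python
import math

# Kinetic energy E = 1/2 m v^2 of a spherical impactor. The ~61,000 kph
# speed is from the article; the 50 m diameter and icy 1,000 kg/m^3
# density are illustrative assumptions, not the study's fitted values.

TNT_TON_J = 4.184e9      # joules per ton of TNT
HIROSHIMA_TONS = 15_000  # Hiroshima yield, roughly 15 kilotons of TNT

def airburst_energy(diameter_m, density_kg_m3, velocity_m_s):
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius**3
    return 0.5 * mass * velocity_m_s**2

energy_j = airburst_energy(50, 1000, 61_000 / 3.6)  # ~9.4e15 J
hiroshimas = energy_j / TNT_TON_J / HIROSHIMA_TONS  # order of hundreds
```

Because energy grows with the cube of diameter and the square of velocity, a modestly larger or faster body reaches the roughly thousand-Hiroshima scale described above; estimating burst altitude and ground effects is what the full calculator adds.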
At the site, there are finely fractured sand grains called shocked quartz that only form at 725,000 pounds per square inch of pressure (5 gigapascals); imagine six 68-ton Abrams military tanks stacked on your thumb. Experiments with l...

Sep 23

8 min 26 sec

A little over a year ago, Google’s Project Loon launched in Kenya: 35 giant balloons with solar-powered electronics inside beaming a 4G signal to the central and western parts of the country. The project was ambitious; each balloon, when fully extended, was the size of a tennis court, and the plan was for them to hover in the stratosphere (20 kilometers above Earth), forming a mesh network to provide internet service to people in remote areas. Just six months after its debut, though, the project was discontinued. Loon’s CEO at the time, Alastair Westgarth, wrote, “We talk a lot about connecting the next billion users, but the reality is Loon has been chasing the hardest problem of all in connectivity—the last billion users: The communities in areas too difficult or remote to reach. … we haven’t found a way to get the costs low enough to build a long-term, sustainable business.” Westgarth went on to extol the learnings from the project, of which there were many. And now, some of them are going into a new initiative, called Project Taara, that wouldn’t have been feasible without the headway made by Loon. To send data between Loon balloons, engineers used optical communication, or as Baris Erkmen, Taara’s director of engineering, calls it in a blog post for X (Alphabet’s moonshot factory), wireless optical communications (WOC). A laser sent out from one site transmits an invisible beam of light to a data receiver on another site. When two sites successfully link up (“like a handshake,” Erkmen says), the data being transmitted through the light beam creates a high-bandwidth internet connection. It’s a complicated handshake. 
To give us an idea of the precision required in the laser and the difficulty of achieving that precision, Erkmen writes, “Imagine pointing a light beam the width of a chopstick accurately enough to hit a five-centimeter target that’s ten kilometers away; that’s how accurate the signal needs to be to be strong and reliable.” His team, he adds, has spent years refining the technology’s atmospheric sensing, mirror controls, and motion detection capabilities; Taara’s terminals can now automatically adjust to changes in the environment to maintain precise connections. Project Taara aims to bridge a connectivity gap between the Republic of the Congo’s Brazzaville and the Democratic Republic of Congo’s Kinshasa. The cities lie just 4.8 kilometers (3 miles) apart, but between them is the Congo River—it’s the deepest river in the world (220 meters/720 feet in parts! Pretty terrifying, if you ask me), the second-fastest, and the only one that crosses the equator twice. That makes for some complicated logistics, and as a result, internet connectivity in Kinshasa (which is on the river’s south bank) is very expensive. Local internet providers are putting down 400 kilometers of fiber connection around the river, but in a textbook example of leapfrogging technology, Project Taara used WOC to beam high-speed connectivity over the river instead. The connection served almost 700 terabytes of data in 20 days with 99.9 percent reliability. That amount of data is “the equivalent of watching a FIFA World Cup match in HD 270,000 times.” Not too shabby. WOC isn’t immune to disturbances like fog, birds, and even monkeys, as Erkmen details in the blog post. But his team has developed network planning tools that estimate the technology’s viability in different areas based on factors like weather, and will focus on places where it’s most likely to work well; in any case, having occasional spotty service is better than no service at all. 
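Erkmen’s chopstick analogy can be turned into numbers with the small-angle approximation: the required pointing accuracy is simply the target size divided by the range. This back-of-envelope calculation is ours, not from the blog post.

```python
import math

# Small-angle estimate of Taara's pointing requirement: angular accuracy
# is target size divided by range. This arithmetic is ours, not X's.

target_m = 0.05      # five-centimeter receiver target
distance_m = 10_000  # ten kilometers away

angle_rad = target_m / distance_m              # 5e-6 rad
angle_microrad = angle_rad * 1e6               # 5 microradians
angle_arcsec = math.degrees(angle_rad) * 3600  # about 1 arcsecond
```

Five microradians is roughly one arcsecond of pointing accuracy, which is telescope-mount territory; hence the years spent refining mirror controls and motion detection.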
According to the Alliance for Affordable Internet, almost half of the world’s population still lacks internet access, and a large percentage of those who have it have low-quality connections, making features like online learning, video streaming, and telehealth inaccessible. A 2019 report by the organization found that only 28 percent of the African population has internet access through a computer, while 34 percent have access through a mobile ...

Sep 22

4 min 57 sec

Remember the philosophical argument that our universe is a simulation? Well, a team of astrophysicists say they’ve created the biggest simulated universe yet. But you won’t find any virtual beings in it—or even planets or stars. The simulation is 9.6 billion light-years to a side, so its smallest structures are still enormous (the size of small galaxies). The model’s 2.1 trillion particles simulate the dark matter glue holding the universe together. Named Uchuu, Japanese for “outer space,” the simulation covers some 13.8 billion years and will help scientists study how dark matter has driven cosmic evolution since the Big Bang. Dark matter is mysterious—we’ve yet to pin down its particles—and yet it’s also one of the most powerful natural phenomena known. Scientists believe it makes up 27 percent of the universe. Ordinary matter—stars, planets, you, me—comprises less than 5 percent. Cosmic halos of dark matter resist the dark energy pulling the universe apart, and they drive the evolution of large-scale structures, from the smallest galaxies to the biggest galaxy clusters. Of course, all this change takes an epic amount of time. It’s so slow that, to us, the universe appears as a still photograph. So scientists make simulations. But making a 3D video of almost the entire universe takes computing power. A lot of it. Uchuu commandeered all 40,200 processors in astronomy’s biggest supercomputer, ATERUI II, for a solid 48 hours a month over the course of a year. The results are gorgeous and useful. “Uchuu is like a time machine,” said Julia F. Ereza, a PhD student at IAA-CSIC. “We can go forward, backward, and stop in time. We can ‘zoom in’ on a single galaxy or ‘zoom out’ to visualize a whole cluster. 
We can see what is really happening at every instant and in every place of the Universe from its earliest days to the present.” Perhaps the coolest part is that the team compressed the whole thing down to a relatively manageable size of 100 terabytes and made it available to anyone. Obviously, most of us won’t have that kind of storage lying around, but many researchers likely will. This isn’t the first—and won’t be the last—mind-bogglingly big simulation. Rather, Uchuu is the latest member of a growing family tree dating back to 1970, when Princeton’s Jim Peebles simulated 300 “galaxy” particles on then-state-of-the-art computers. While earlier simulations sometimes failed to follow sensible evolutionary paths—spawning mutant galaxies or rogue black holes—with the advent of more computing power and better code, they’ve become good enough to support serious science. Some go big. Others go detailed. Increasingly, one needn’t preclude the other. Every few years, it seems, astronomers break new ground. In 2005, the biggest simulated universe was 10 billion particles; by 2011, it was 374 billion. More recently, the Illustris TNG project has unveiled impressively detailed (and yet still huge) simulations. Scientists hope that by setting up the universe’s early conditions and physical laws and then hitting play, their simulations will reproduce the basic features of the physical universe as we see it. This lends further weight to theories of cosmology and also helps explain or even make predictions about current and future observations. Astronomers expect Uchuu will help them interpret galaxy surveys from the Subaru Telescope in Hawaii and the European Space Agency’s Euclid space telescope, due for launch in 2022. Simulations in hand, scientists will refine the story of how all this came to be, and where it’s headed. (Learn more about the work in the team’s article published this month in the Monthly Notices of the Royal Astronomical Society.) 
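Under the hood, simulations like Uchuu evolve collisionless dark matter particles under their own gravity. As a toy illustration (not Uchuu’s actual code, which uses tree and particle-mesh solvers to avoid the O(N²) cost of a direct sum), here is a pairwise-gravity step with the kick-drift-kick leapfrog integrator that N-body codes typically use:

```python
import numpy as np

# Toy gravity-only N-body step in the spirit of dark matter simulations
# like Uchuu. Real codes use tree or particle-mesh solvers to avoid this
# direct O(N^2) pairwise sum; this is only a sketch of the physics.

def accelerations(pos, masses, G=1.0, softening=0.1):
    """Pairwise gravitational acceleration with Plummer softening."""
    diff = pos[None, :, :] - pos[:, None, :]  # diff[i, j] = r_j - r_i
    dist2 = (diff**2).sum(-1) + softening**2
    inv_r3 = dist2**-1.5
    np.fill_diagonal(inv_r3, 0.0)             # no self-force
    return G * (diff * inv_r3[..., None] * masses[None, :, None]).sum(axis=1)

def leapfrog_step(pos, vel, masses, dt):
    """Kick-drift-kick leapfrog, the standard N-body integrator."""
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    return pos, vel
```

Leapfrog is favored in cosmology because it conserves the system’s phase-space structure well over the billions of simulated years between snapshots.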
Image Credit: A snapshot of the dark matter halo of the largest galaxy cluster formed in the Uchuu simulation. Tomoaki Ishiyama

Sep 17

4 min 23 sec

In 1953, a Harvard psychologist thought he discovered pleasure—accidentally—within the cranium of a rat. With an electrode inserted into a specific area of its brain, the rat was allowed to pulse the implant by pulling a lever. It kept returning for more, insatiably and incessantly pulling the lever. In fact, the rat didn’t seem to want to do anything else. Seemingly, the reward center of the brain had been located. More than 60 years later, in 2016, a pair of artificial intelligence (AI) researchers were training an AI to play video games. The goal of one game, CoastRunners, was to complete a racetrack. But the AI player was rewarded for picking up collectable items along the track. When the program was run, they witnessed something strange. The AI found a way to skid in an unending circle, picking up an unlimited cycle of collectibles. It did this, incessantly, instead of completing the course. What links these seemingly unconnected events is something strangely akin to addiction in humans. Some AI researchers call the phenomenon “wireheading.” It is quickly becoming a hot topic among machine learning experts and those concerned with AI safety. One of us (Anders) has a background in computational neuroscience, and now works with groups such as the AI Objectives Institute, where we discuss how to avoid such problems with AI; the other (Thomas) studies history, and the various ways people have thought about both the future and the fate of civilization throughout the past. After striking up a conversation on the topic of wireheading, we both realized just how rich and interesting the history behind this topic is. It is an idea that is very of the moment, but its roots go surprisingly deep. We are currently working together to research just how deep the roots go: a story that we hope to tell fully in a forthcoming book. 
The topic connects everything from the riddle of personal motivation, to the pitfalls of increasingly addictive social media, to the conundrum of hedonism and whether a life of stupefied bliss may be preferable to one of meaningful hardship. It may well influence the future of civilization itself. Here, we outline an introduction to this fascinating but under-appreciated topic, exploring how people first started thinking about it. The Sorcerer’s Apprentice When people think about how AI might “go wrong,” most probably picture something along the lines of malevolent computers trying to cause harm. After all, we tend to anthropomorphize—think that nonhuman systems will behave in ways identical to humans. But when we look to concrete problems in present-day AI systems, we see other, stranger ways that things could go wrong with smarter machines. One growing issue with real-world AIs is the problem of wireheading. Imagine you want to train a robot to keep your kitchen clean. You want it to act adaptively, so that it doesn’t need supervision. So you decide to try to encode the goal of cleaning rather than dictate an exact—yet rigid and inflexible—set of step-by-step instructions. Your robot is different from you in that it has not inherited a set of motivations—such as acquiring fuel or avoiding danger—from many millions of years of natural selection. You must program it with the right motivations to get it to reliably accomplish the task. So, you encode it with a simple motivational rule: it receives reward from the amount of cleaning-fluid used. Seems foolproof enough. But you return to find the robot pouring fluid, wastefully, down the sink. Perhaps it is so bent on maximizing its fluid quota that it sets aside other concerns: such as its own, or your, safety. 
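The cleaning-robot scenario boils down to a few lines: an agent that greedily optimizes a proxy reward (fluid used) picks the action that is worst by the true goal (dirt removed). The actions and numbers below are invented for illustration.

```python
# Toy illustration of specification gaming ("wireheading"): the robot is
# scored on a proxy (cleaning fluid used), not the true goal (dirt
# removed). The actions and numbers are invented for illustration.

actions = {
    # action: (fluid_used_ml, dirt_removed)
    "scrub counter": (20, 5),
    "mop floor": (50, 8),
    "pour fluid down sink": (500, 0),
}

def proxy_reward(action):
    """What the robot was told to maximize: fluid consumed."""
    return actions[action][0]

def true_utility(action):
    """What we actually care about: how much dirt gets removed."""
    return actions[action][1]

chosen = max(actions, key=proxy_reward)  # the robot's pick
wanted = max(actions, key=true_utility)  # the designer's intent
```

Here `chosen` comes out as pouring fluid down the sink, which removes no dirt at all, while the action we wanted by true utility is mopping the floor. The gap between the two objectives is the whole problem.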
This is wireheading—though the same glitch is also called “reward hacking” or “specification gaming.” This has become an issue in machine learning, where a technique called reinforcement learning has lately become important. Reinforcement learning simulates autonomous agents and trains them to invent ways to accomplish tasks. It does so by penalizing them for fai...

Sep 17

24 min 37 sec

Walmart has been America’s biggest retailer since the 1990s, its focus on low costs and ultra-efficient logistics helping it edge out competitors and keep customers coming back. But Amazon has been gaining on Walmart, and the pandemic gave the online retail giant a huge boost. Both companies are continuously searching for ways to cut costs while meeting consumer needs. It seems one of the needs that’s steadily increasing is delivery. Whether due to busy schedules, health or safety concerns, or simply avoiding the stress of steering a loaded shopping cart up and down countless aisles, more people are trading in-store shopping for online shopping. Amazon clearly has a leg up over Walmart in that arena, but the brick-and-mortar retail king isn’t about to hand over its crown without a fight. Yesterday Walmart announced it’s adding a new delivery service for goods purchased online—and not just any delivery service, but one powered by driverless cars. The company has partnered with automaker Ford and autonomous car startup Argo AI, and plans to use Ford Escape hybrid cars outfitted with Argo’s self-driving software to make deliveries. The service will initially be available to customers in Austin, Miami, and the Washington DC area. Although headlines about the announcement are emphasizing the driverless aspect of the delivery vehicles, they will in fact still have human safety drivers for the time being. And it’s unclear whether customers will retrieve their orders from the cars—that would make the most sense if the ultimate goal is to eliminate the safety drivers—or if drivers will get orders from the cars to customers’ doorsteps. Consumer expectations around fast, seamless delivery are going up, probably thanks in large part to Amazon’s ability to meet those expectations (Prime customers have had the option of free same-day delivery since 2019). 
But same-day delivery has some serious side effects to consider, including outsize stress on workers and a negative environmental impact. As Patrick Browne, director of global sustainability at UPS, put it, “The time in transit has a direct relationship to the environmental impact. I don’t think the average consumer understands the environmental impact of having something tomorrow vs. two days from now.” Walmart and Amazon (and their smaller competitors) will continue to roll out services like autonomous delivery as long as consumers demand them (and are willing to pay for them). And as the big players in retail work to outdo each other and their technology improves, these services will likely drop in cost. As consumers, we’ll take any innovation that makes our lives easier or saves us time. But once in a while, it’s probably worth asking: how badly do we actually need paper towels or dental floss or whatever’s in our virtual shopping carts dropped at our doorsteps within hours? Sure, shopping can be a pain, and some orders are truly urgent. But as our expectations around effortless, fast gratification rise—and the market accordingly shapes itself to meet those expectations—we should be conscious of the associated non-monetary costs. A year ago Walmart launched its membership program, Walmart+, which includes benefits like prescription and fuel discounts and free grocery deliveries. Deutsche Bank estimates Walmart+ has about 32 million subscribers—and 86 percent of them also have Amazon Prime. To stay competitive and expand its membership program, Walmart is converting many of its stores into mini-warehouses with high-tech, automated systems. Amazon, for its part, recently patented a delivery system that involves several small “secondary vehicles” dispersing from a truck to leave packages on customers’ doorsteps. 
It appears the future of retail will involve a lot more technology and a lot less in-store shopping, whether you’re buying from Amazon, Walmart, Kroger, or any number of other stores. Image Credit: Jared Wickerham/Argo AI

Sep 16

4 min 11 sec

Self-driving cars are taking longer to hit roads than many experts predicted. Despite impressive progress in the field (like trucks using self-driving features to move freight more efficiently, Waymo launching its robotaxi service for vetted riders in San Francisco, or Tesla rolling out version 10 of its full self-driving software), we’re a long way from ubiquitous Level 5 autonomy. What if there was some sort of in-between, a workaround to give us a glimpse of a future where empty cars deftly navigate city streets? A Berlin-based startup called Vay has come up with just such a solution, and it is, in short, creative, unexpected—and sort of ingenious. Rather than ceding full control of cars to software from the get-go, Vay plans to use human “teledrivers” to drive cars remotely. Sound a lot like a real-life video game? I thought so too—and in the most tangible of ways, it is. Teledrivers sit at stations that closely resemble an arcade game, complete with steering wheel, pedals, and monitors. Of course, in crucial ways the teledriving doesn’t resemble an arcade game. Vay emphasizes that its system was built with safety front of mind, with extra precautions against the top four causes of accidents in urban areas: driving under the influence, speeding, distraction, and fatigue. These days, distraction probably takes the cake, because let’s be honest, we all look at our phones while driving. Teledrivers, on the other hand, will be fully engaged with the driving environment (hopefully, their phones won’t even be within reach), and are vetted and trained by the company. The monitors they look at while operating a car also give them a 360-degree view around the car. Here’s how Vay’s service will work for consumers. Using a smartphone app, you’ll hail a car, much like you would with Uber or Lyft—except the car will pull up empty (having navigated to your location via a teledriver), and you’ll get in and drive yourself to your destination. 
Upon arrival, you get out and get on your way, and the teledriver takes over again, driving the car to its next passenger. Perhaps most intriguing of all, Vay claims its rides will cost just “a fraction” of what Uber and Lyft currently charge for rides. Between that and the added bonus of not having to make small talk with drivers or pool passengers (am I right, introverts?), Vay may really be onto something. The company’s CEO, Thomas von der Ohe, has some experience with automation, having worked on Amazon’s Alexa and at Zoox, a robotaxi startup Amazon bought in 2020. Scaling up the level of automation is one of Vay’s goals, though it seems they won’t be in a huge hurry to do so, saying it will launch autonomous features gradually based on data gathered by teledriving, and that it believes “we will enter a decade of human-machine collaboration instead of directly reaching full autonomy.” Again, they may be onto something. All the hype around self-driving cars has consumers eagerly anticipating their arrival, but between a complex regulatory environment, ongoing safety concerns, and the stark fact that exceeding or even matching the human brain’s ability to operate a vehicle is really, really hard, the “finish line” of Level 5 autonomy will likely remain elusive for years to come—if not a decade or more. In the meantime, providing alternate solutions that help ease us into a driverless future, maybe while saving us money and making roads safer, seems like a good course of action. Vay does have some big hurdles yet to clear. For one, it will be interesting to see what sort of solutions the company devises for matching supply to demand; Uber and Lyft use surge pricing when ride requests outnumber drivers, and drivers can choose to go on duty at busy times when prices are high. In Vay’s case, the number of teledrivers sitting at their arcade-game-like stations at any given time will be fixed. 
The company will also have to get approvals from regulators in the cities where it plans to offer its servic...

Sep 15

4 min 32 sec

Thanks to CRISPR, gene therapy and “designer babies” are now a reality. The gene editing Swiss army knife is one of the most impactful biomedical discoveries of the last decade. Now a new study suggests we’ve just begun dipping our toes into the CRISPR pond. CRISPR-Cas9 comes from lowly origins. It was first discovered as a natural mechanism in bacteria and archaea to help fight off invading viruses. This led Dr. Feng Zhang, one of the pioneers of the technology, to ask: where did this system evolve from? Are there any other branches of the CRISPR family tree that we can also harness for gene editing? In a new paper published last week in Science, Zhang’s team traced the origins of CRISPR to unveil a vast universe of potential gene editing tools. As “cousins” of CRISPR, these new proteins can readily snip targeted genes inside Petri dishes, similar to their famous relative. But unlike previous CRISPR variants, these are an entirely new family line. Collectively dubbed OMEGA, they operate similarly to CRISPR. However, they use completely foreign “scissor” proteins, along with alien RNA guides previously unfamiliar to scientists. What came as a total surprise was the abundance of these alternative systems. A big data search found over a million potential genetic sites that encode just one of these cousins, far more widespread “than previously suspected.” These newly-discovered classes of proteins have “strong potential for developing as biotechnologies,” the authors said. In other words, the next gene editing wunderkind could be silently waiting inside other bacteria or algae, ready to be re-engineered to snip, edit, and alter our own genomes for the next genetic revolution. The Many Variations of CRISPR The first CRISPR system that came to fame was CRISPR-Cas9. The idea is simple but brilliant. Using a genetic vector—a round Trojan horse of sorts that delivers genes into cells—scientists can encode the two components for gene editing. 
One is a guide RNA, which directs the system to the target gene. The other is Cas9, the “scissors” that break the gene. Once a gene is snipped, it wants to heal. During this process it’s possible to insert new genetic code, delete old code, or shift the code in a way that inactivates subsequent genes. Thanks to its relative simplicity, CRISPR didn’t just take off—it skyrocketed. Subsequent studies found variants optimized for slightly different tasks. For example, there are Cas9 varieties that have very low off-target activity or are smaller, making them easier to package and deliver into cells. Others include base editors, which swap a DNA letter without breaking the chain, or RNA editors, which edit RNA chains like a word processor. The burgeoning CRISPR pantheon was in part because of different Cas “scissor” proteins. Although thousands of variations exist, wrote Dr. Lucas Harrington at the University of California, Berkeley, who worked with CRISPR pioneer Dr. Jennifer Doudna, “gene editing experiments have largely focused on a small subset of representatives.” Scanning for new variants in nature, the team identified powerful new Cas proteins that retain their activity in high heat, and extremely compact ones that can sneak into nooks and crannies of the genome that otherwise block classic Cas proteins. The power of Cas variants persuaded scientists to artificially evolve new proteins with more optimized features. But what if the secret to better gene editing tools isn’t just looking forward? What if it’s to peek back in time? CRISPR Ancestors The new study took this approach: scan through evolutionary history to trace the origins of CRISPR-Cas9. Like tracing any family tree, it starts with knowing thyself. Cas9 belongs to a family called “RNA-guided nucleases.” Basically, these proteins can be shepherded by RNA guides, and they have the ability to cut genetic material. Back in 2015, a study suggested one evolutionary root of Cas9. 
It’s weird: a bunch of “jumping genes,” or genetic com...
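To make the guide-RNA-plus-scissors picture concrete: the widely used SpCas9 cuts where a ~20-nucleotide protospacer is immediately followed by an “NGG” PAM motif (any base, then two guanines). The sketch below simply enumerates that motif in a DNA string; real guide design also scores off-target matches and accessibility.

```python
import re

# Sketch of enumerating candidate SpCas9 target sites: a 20-nucleotide
# protospacer immediately followed by an "NGG" PAM (any base, then GG),
# which is SpCas9's canonical requirement. Real guide design also scores
# off-target risk and chromatin accessibility; this only finds the motif.

def find_cas9_sites(dna):
    """Return (position, protospacer, pam) for each 20-mer followed by NGG."""
    pattern = re.compile(r"(?=([ACGT]{20})([ACGT]GG))")  # lookahead -> overlapping hits
    return [(m.start(), m.group(1), m.group(2)) for m in pattern.finditer(dna)]

# A toy sequence with exactly one site, at position 0.
sites = find_cas9_sites("A" * 20 + "TGG")
```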

Sep 14

8 min 25 sec

The world’s largest car manufacturer by volume has been sluggish in its efforts to electrify compared to competitors. But Toyota has just announced a huge investment in battery technology that may be a sign it’s shifting course. Although Toyota’s Prius hybrid was the first electrified vehicle to really hit the mainstream, the company failed to capitalize on its early lead. It still doesn’t sell a fully electric vehicle in either the US or Japan, at a time when more or less every major automaker—from Volvo to Volkswagen—has at least one model powered by batteries alone. The company seems to be belatedly joining the party after executives announced that it would invest $13.6 billion in battery technology over the next decade. This includes $9 billion to be spent on manufacturing, which will see it scale up to 10 battery production lines by 2025 and ultimately up to around 70. During a press briefing, chief technology officer Masahiko Maeda said part of the company’s plan is to reduce the cost of batteries by 30 percent or more through innovations in materials and new designs. They are also working on ways to reduce the amount of energy the car draws from those batteries by 30 percent. All of this follows from the company’s April announcement that it plans to release 70 electric cars around the world by 2025, suggesting that it’s finally joining the consensus among automakers that electric vehicles are the future. But as noted by Green Car Reports, only 15 of those 70 cars will be fully electric, with the rest made up of hybrids or hydrogen vehicles, which the company has also been pushing for a number of years. In contrast, many competitors have announced plans to go fully electric in the coming decade. Toyota’s reluctance to double down on electric vehicles is all the more confusing considering it is seen as a global leader in developing batteries for electric vehicles. 
It’s also a frontrunner in the quest to commercialize solid-state batteries, which could significantly increase energy density and therefore the range of electric vehicles. The explanation seems to lie in the fact that, despite being an early leader in electric cars, Toyota considered electrification a stopgap until cars powered by hydrogen fuel cells could replace gasoline ones. While the company does sell one hydrogen-powered car, its expense and the lack of fueling infrastructure mean adoption is lagging. Given that the reason for replacing gasoline vehicles is climate change, the fact that hydrogen still has a long way to go until it’s truly green suggests that a future for decarbonizing transport using fuel cells is still a distant dream. Perhaps surprisingly for a company that led the initial charge to create a greener future for the car, Toyota has even been lobbying against the transition to electric vehicles, according to the New York Times. While this is probably at least partly an effort to protect its investments in non-battery-focused transport technologies, the company’s argument is that a transition to electric vehicles as rapid as many are suggesting is not practical given the current state of the technology. Last year, Toyota president Akio Toyoda claimed Japan would run out of electricity if it switched entirely to electric vehicles, unless it spent hundreds of billions of dollars on upgrading its power network. More recently, company director Shigeki Terashi said it was still too early to put all of our eggs in the electric vehicle basket. So while this new battery investment will certainly be a major boon to efforts to electrify vehicles, it seems Toyota is still not fully on board with the electric vehicle revolution. Image Credit: Toyota
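Toyota’s two 30 percent targets compound if they apply independently: cheaper batteries per kilowatt-hour times fewer kilowatt-hours drawn per kilometer. Multiplying them together is our own back-of-envelope arithmetic, not a figure Toyota has announced.

```python
# Two independent 30% improvements compound multiplicatively. Combining
# them into a per-kilometer figure is our back-of-envelope arithmetic,
# not a number Toyota has announced.

cost_factor = 1 - 0.30    # battery cost per kWh after a 30% cut
energy_factor = 1 - 0.30  # kWh drawn per km after a 30% cut

combined = cost_factor * energy_factor   # 0.49 of today's battery cost per km
reduction_pct = (1 - combined) * 100     # roughly a 51% reduction
```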

Sep 13

3 min 51 sec

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science. It’s not hard to see why. Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds. Indeed, there’s already a productive flow of knowledge between the fields. Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons. You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output. Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, progressively tuning and sharpening the algorithm. The brain works like this too. But the keyword above is loosely. Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex. In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. While they expected the results would show biological neurons are more complex—they were surprised at just how much more complex they actually are. In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex. 
Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI. “[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week. Amazing Brains Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon. On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes). To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron? First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own. They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model. (Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.) 
They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five...

Sep 12

7 min 15 sec

Global warming is a big challenge for warm-blooded animals, which must maintain a constant internal body temperature. As anyone who’s experienced heatstroke can tell you, our bodies become severely stressed when we overheat. Animals are dealing with global warming in various ways. Some move to cooler areas, such as closer to the poles or to higher ground. Some change the timing of key life events such as breeding and migration, so they take place at cooler times. And others evolve to change their body size to cool down more quickly. Our new research examined another way animal species cope with climate change: by changing the size of their ears, tails, beaks, and other appendages. We reviewed the published literature and found examples of animals increasing appendage size in parallel with climate change and associated temperature increases. In doing so, we identified multiple examples of animals that are most likely “shape-shifters.” The pattern is widespread, and suggests climate warming may result in fundamental changes to animal form. Adhering to Allen’s Rule It’s well known that animals use their appendages to regulate their internal temperature. African elephants, for example, pump warm blood to their large ears, which they then flap to disperse heat. The beaks of birds perform a similar function—blood flow can be diverted to the bill when the bird is hot. This means there are advantages to bigger appendages in warmer environments. In fact, as far back as the 1870s, American zoologist Joel Allen noted that in colder climates, warm-blooded animals (also known as endotherms) tended to have smaller appendages, while those in warmer climates tended to have larger ones. This pattern became known as Allen’s rule, which has since been supported by studies of birds and mammals. Biological patterns such as Allen’s rule can also help make predictions about how animals will evolve as the climate warms.
Our research set out to find examples of animal shape-shifting over the past century, consistent with climatic warming and Allen’s rule. Which Animals Are Changing? We found most documented examples of shape-shifting involve birds—specifically, increases in beak size. This includes several species of Australian parrots. Studies show the beak size of gang-gang cockatoos and red-rumped parrots has increased by between four percent and ten percent since 1871. Mammal appendages are also increasing in size. For example, in the masked shrew, tail and leg length have increased significantly since 1950. And in the great roundleaf bat, wing size increased by 1.64 percent over the same period. The variety of examples indicates shape-shifting is happening in different types of appendages and in a variety of animals, in many parts of the world. But more studies are needed to determine which kinds of animals are most affected. Other Uses of Appendages Of course, animal appendages have uses far beyond regulating body temperature. This means scientists have sometimes focused on other reasons that might explain changes in animal body shape. For example, studies have shown the average beak size of the Galapagos medium ground finch has changed over time in response to seed size, which is in turn influenced by rainfall. Our research examined previously collected data to determine if temperature also influenced changes in beak size of these finches. These data do demonstrate rainfall (and, by extension, seed size) determines beak size. After drier summers, survival of small-beaked birds was reduced. But we found clear evidence that birds with smaller beaks are also less likely to survive hotter summers. This effect on survival was stronger than that observed with rainfall. This tells us the role of temperature may be as important as other uses of appendages, such as feeding, in driving changes in appendage size. 
Our research also suggests we can make some predictions about which species are most likely to change appendage size in response to increasing tempe...

Sep 10

5 min 43 sec

A little over four years ago, the world’s first commercial plant for sucking carbon dioxide out of the air opened near Zurich, Switzerland. The plant was powered by a waste heat recovery facility, with giant fans pushing air through a filtration system that trapped the carbon. The carbon was then separated and sold to buyers, such as a greenhouse that used it to help grow vegetables. The plant ran as a three-year demonstration project, capturing an estimated 900 tons of CO2 (the equivalent of the annual emissions of 200 cars) per year. This week, a plant about four times as large as the Zurich facility started operating in Iceland, joining 15 other direct air capture (DAC) plants that currently operate worldwide. According to the IEA, these plants collectively capture more than 9,000 tons of CO2 per year. Christened Orca after the Icelandic word for energy, the new plant was built by Swiss company Climeworks in partnership with Icelandic carbon storage firm Carbfix. Orca is the largest existing facility of its type, able to capture 4,000 tons of carbon per year. That’s equal to the emissions of 790 cars. The plant consists of eight “collector containers” each about the size and shape of a standard shipping container. Their fans run on energy from a nearby geothermal power plant, which was part of the reason this location made sense; Iceland has an abundance of geothermal energy, not to mention a subterranean geology that lends itself quite well to carbon sequestration. Orca was built on a lava plateau in the country’s southwest region. This plant works a little differently than the Zurich plant, in that the captured carbon is liquefied then pumped underground into basalt caverns. Over time (less than two years, according to Carbfix’s website), it turns to stone. One of the biggest issues with direct air capture is that it’s expensive, and this facility is no exception.
Climeworks co-founder Christoph Gebald estimates it’s currently costing $600 to $800 to remove one metric ton of carbon. Costs would need to drop to around a sixth of this level for the company to make a profit. Gebald thinks Climeworks can get costs down to $200 to $300 per ton by 2030, and half that by 2040. The National Academy of Sciences estimated that once the cost of CO2 extraction gets below $100-150 per ton, the air-captured commodity will be economically competitive with traditionally-sourced oil. The other problem that detractors of DAC cite is its energy usage relative to the amount of CO2 it’s capturing. These facilities use a lot of energy, and they’re not making a lot of difference. Granted, the energy they use will come from renewable sources, but we’re not yet to the point where that energy is unlimited or free. An IEA report from May of this year stated that to reach the carbon-neutral targets that have been set around the world, almost one billion metric tons of CO2 will need to be captured using DAC every year. Our current total of 9,000 tons is paltry in comparison. But Climeworks and other companies working on DAC technology are optimistic, saying that automation and increases in energy efficiency will drive down costs. “This is a market that does not yet exist, but a market that urgently needs to be built,” Gebald said. “This plant that we have here is really the blueprint to further scale up and really industrialize.” Image Credit: Climeworks
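Those cost and scale figures are easy to sanity-check with back-of-the-envelope arithmetic; this sketch uses only the numbers quoted above.

```python
# Gebald's current cost estimate, USD per metric ton of CO2 removed
current_cost_low, current_cost_high = 600, 800

# "Around a sixth of this level" for profitability:
print(current_cost_low / 6, current_cost_high / 6)  # roughly $100-133/ton, in line
# with the $100-150/ton competitiveness threshold cited by the National Academy of Sciences

# Scale gap: Orca's annual capture vs. the IEA's ~1 billion tons/year target
orca_tons_per_year = 4_000
target_tons_per_year = 1_000_000_000
print(target_tons_per_year // orca_tons_per_year)  # ~250,000 Orca-sized plants
```

The last line is what makes the current 9,000-ton worldwide total look so paltry.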

Sep 9

3 min 48 sec

Between the grim outlook reported by the IPCC’s Sixth Assessment Report last month and frequent reports of extreme weather events all over the world, the climate crisis feels like it’s getting more dire by the week. Accordingly, calls for action are intensifying, and companies and governments are scrambling for solutions. Renewables are ramping up, innovative energy storage technologies are being brought to the table, and pledges to go carbon-neutral are piling up as fast as, well, carbon. South Korea’s Hyundai Motor Group has joined the fray, but on a path that diverges a bit from the crowd; it’s going all-in on hydrogen. At its aptly named Hydrogen Wave Forum this week, the company unveiled multiple hydrogen-powered concept vehicles, as well as a strategy for building up its presence in the hydrogen space over the next few years (and decades). Among them was a ground shipping concept it’s calling the Trailer Drone, which sits on a fuel-cell-powered chassis called the e-Bogie. The e-Bogies, named after the frames train cars sit on, have four-wheel independent steering that lets them move in ways normal cars and trucks can’t, like sideways (in crab fashion) or in circles. The modular e-Bogies can be combined to carry different-sized trailers, and can go an estimated 621 miles (1,000 kilometers) on a single fill-up. The system would be autonomous, and the concept doesn’t include a cab or seat for a human driver. Hyundai also unveiled a hydrogen-powered concept sports car called the Vision FK. The car is a plug-in hybrid, meaning the fuel cell charges a traditional battery. The 500-kilowatt fuel cell gives the car the ability to go from 0 to 100 kilometers per hour in under 4 seconds. The carmaker didn’t give a timeline for when (or whether) the Vision FK would enter production, though. Finally, Hyundai said it’s working on hydrogen-powered versions of its existing commercial vehicles, and plans to bring those to market by 2028.
Hyundai is by no means new to the hydrogen game; the company already has fuel-cell-powered trucks and buses on the roads, including its Xcient truck, which is in use in Switzerland, and its Elec City Fuel Cell bus, which is on roads in South Korea and being trialed in Germany. One of the technology’s biggest detractors is none other than Elon Musk, who finds hydrogen fuel cells “extremely silly.” But Toyota would disagree with Musk’s take; the company is building a hydrogen-powered prototype city near the base of Mount Fuji called Woven City. For its part, Hyundai is aiming to get its fuel cell powertrain to a point where it can compete cost-wise with electric vehicle batteries by 2030. A study released earlier this year by McKinsey’s hydrogen council found that when you factor in the relative efficiencies of the power sources and lifetime costs of a truck, green hydrogen could reach cost parity with diesel by 2030. A paper published in Joule last month laid out a road map for building a green hydrogen economy. Despite these promising outlooks, it’s still highly uncertain whether hydrogen will become a widespread, cost-effective energy source. But it seems we’re getting to a point where it’s worth looking into any option that could make the future of the planet look brighter than it does right now. Image Credit: Hyundai

Sep 8

3 min 33 sec

CRISPR has revolutionized genome engineering, but the size of its molecular gene-editing components has limited its therapeutic uses so far. Now, a trio of new research papers detail compact versions of the gene-editing tool that could significantly expand its applications. While we’ve been able to edit genomes since the 1990s, the introduction of CRISPR in 2015 transformed the field thanks to its flexibility, simplicity, and efficiency. The technology is based on a rudimentary immune system found in microbes that combines genetic mugshots of viruses with an enzyme called Cas9 that hunts them down and chops up their DNA. This system can be re-purposed by replacing the viral genetic code with whatever sequence you want to edit and precisely snipping the DNA at that location. One outstanding problem, however, is that the system’s large physical size makes it hard to deliver to cells effectively. Adeno-associated viral vectors (AAVs)—which are small, non-pathogenic viruses that can be re-purposed to inject genetic code into cells—are the gold standard delivery system for in vivo gene therapies. They produce little immune response and have received FDA approval for therapeutic use, but their tiny size makes using them to deliver CRISPR tricky. Now, however, three research papers published last week show that a family of tiny Cas proteins derived from archaea are small enough to fit in AAVs and can edit human DNA. The most commonly used Cas9 protein comes from the Streptococcus pyogenes bacterium, which is 1,368 amino acids long. When combined with the RNA sequence needed to guide it to its target, that’s too big to fit in an AAV. And while you can deliver them separately, this significantly reduces efficiency, as you can’t guarantee every cell will receive both. But there’s considerable diversity in the proteins used in natural CRISPR systems, so researchers have been screening the microbial world for smaller alternatives.
Two promising candidates are Cas9 proteins from Staphylococcus aureus and Streptococcus thermophilus, which are 1,053 and 1,121 amino acids long, respectively. Their relatively smaller size makes it possible to package them in an AAV along with their guide RNA. That said, even these two smaller alternatives may not be small enough. In recent years CRISPR’s capabilities have been expanded significantly, going from simply snipping a single gene to inserting genes, swapping DNA letters, or targeting multiple sites at once. All of this requires much more genetic material to be delivered into the cell, which rules out AAVs. Yet another family of proteins known as Cas12f has garnered attention for their tiny proportions—generally between 400 and 700 amino acids—but it was not known whether they could be coaxed to work outside microbes. That’s where last week’s papers, published in Nature Biotechnology and Nature Chemical Biology, come in. Scientists showed that proteins from this family could be packaged inside AAVs along with their guide RNA and delivered to human cells to make effective edits. A third paper in Molecular Cell used protein engineering to transform a Cas12f protein that didn’t appear to work in mammalian cells into one that did. While the team didn’t actually test whether they could deliver the protein using AAVs, they showed it was small enough, even when combined with a variety of advanced tools like prime and base editors. These breakthroughs could provide a significant boost to in vivo therapies. CRISPR delivered by lipid nanoparticles—the same mechanism used in mRNA vaccines—is already making its way into the clinic, but this approach is significantly less efficient than AAVs. These are still very early stage studies though, and despite the promising editing performance, it will take a lot more research to properly characterize the capabilities and safety profiles of these new proteins. 
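A rough size calculation shows why these amino-acid counts matter. Note the assumptions: the ~4.7 kb AAV payload limit is a commonly cited figure rather than one from these papers, and the 1 kb allowance for the promoter and guide RNA cassette is a loose illustrative estimate.

```python
AAV_CAPACITY_BP = 4700     # commonly cited AAV payload limit, base pairs (assumption)
OTHER_ELEMENTS_BP = 1000   # promoter + guide RNA cassette (rough assumption)

def coding_size_bp(amino_acids):
    """DNA needed to encode a protein: three bases per amino acid."""
    return amino_acids * 3

for name, aa in [("S. pyogenes Cas9", 1368),
                 ("S. aureus Cas9", 1053),
                 ("Cas12f (upper end)", 700)]:
    total = coding_size_bp(aa) + OTHER_ELEMENTS_BP
    verdict = "fits" if total <= AAV_CAPACITY_BP else "too big"
    print(f"{name}: {total} bp -> {verdict}")
```

Consistent with the article: the S. pyogenes construct overshoots the limit, the smaller Cas9s squeeze in with little room to spare, and the tiny Cas12f proteins leave headroom for extras like prime and base editors.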
If they do turn out to be as effective as the original CRISPR system however, they could dramatically expand the...

Sep 5

4 min 26 sec

Today, wireless charging is little more than a gimmick for high-end smartphones or pricey electric toothbrushes. But a new approach that can charge devices anywhere in a room could one day allow untethered factories where machinery is powered without cables. As the number of gadgets we use has steadily grown, so too has the number of cables and chargers cluttering up our living spaces. This has spurred growing interest in wireless charging systems, but the distances they work over are very short, and they still have to be plugged into an outlet. So, ultimately, they make little difference. Now though, researchers have devised a way to wirelessly power small electronic devices anywhere in a room. It requires a pretty hefty retrofit of the room itself, but the team says it could eventually be used to power everything from mobile robots in factories to medical implants in people. “This really ups the power of the ubiquitous computing world,” Alanson Sample, from the University of Michigan, said in a press release. “You could put a computer in anything without ever having to worry about charging or plugging in.” Efforts to beam power over longer distances have typically used microwaves to transmit it. But such approaches require large antennas and targeting systems. They also present risks for spaces where humans are present because microwaves can damage biological tissue. Commercial wireless chargers instead rely on passing a current through a wire charging coil to create a magnetic field, which induces an electric current in a wire receiving coil installed in the device you want to charge. However, the approach only works over very short distances—roughly equal to the diameter of the charging coil. The new approach, outlined in a paper in Nature Electronics, works on similar principles, but essentially turns the entire room into a giant magnetic charger, allowing any device within the room that has a receiving coil to draw power. 
To build the system, Sample and colleagues from the University of Tokyo installed conductive aluminum panels in the room’s walls, floor, and ceiling and inserted a large copper pole in the middle of it. They then mounted devices, called lumped capacitors, in rows running horizontally through the middle of each panel and at the center of the pole. When current passes through the panels, it’s channeled into the capacitors, generating magnetic fields that permeate the 100-square-foot room and deliver 50 watts of power to any devices in it. Importantly, the capacitors also isolate potentially harmful electric fields within themselves. As a result, the team showed the system doesn’t exceed Federal Communications Commission (FCC) guidelines for electromagnetic energy exposure. This is actually the second incarnation of this technology. Sample first introduced the idea in a 2017 paper in PLOS ONE while working for Disney. But the latest research solves a crucial limitation of the earlier work. Previously the system produced a single magnetic field that swirled in a circle around the central pole, resulting in dead spots in the corners of the square room. The new setup creates two simultaneous magnetic fields, one spinning around the pole and another concentrated near the walls themselves. This way the researchers were able to achieve charging efficiency above 50 percent in 98 percent of the room compared to only 5.75 percent of the room for the previous iteration. They also found that if they only relied on the second magnetic field, they could remove the obstructive pole and still get reasonable charging in most of the room (apart from right at the center). While that’s a significant improvement, it still means that on average 50 percent of the power coming out of the wall socket is wasted. Such low efficiencies are a common problem for wireless charging, as an investigation by OneZero found last year. 
Given the small amount of power required to charge everyday devices it’s unlikely to have an especially n...

Sep 3

5 min 8 sec

How rare is our solar system? In the 30 years or so since planets were first discovered orbiting stars other than our sun, we have found that planetary systems are common in the galaxy. However, many of them are quite different from the solar system we know. The planets in our solar system revolve around the sun in stable and almost circular paths, which suggests the orbits have not changed much since the planets first formed. But many planetary systems orbiting around other stars have suffered from a very chaotic past. The relatively calm history of our solar system has favored the flourishing of life here on Earth. In the search for alien worlds that may contain life, we can narrow down the targets if we have a way to identify systems that have had similarly peaceful pasts. Our international team of astronomers has tackled this issue in research published in Nature Astronomy. We found that between 20 and 35 percent of sun-like stars eat their own planets, with the most likely figure being 27 percent. This suggests at least a quarter of planetary systems orbiting stars similar to the sun have had a very chaotic and dynamic past. Chaotic Histories and Binary Stars Astronomers have seen several exoplanetary systems in which large or medium-sized planets have moved around significantly. The gravity of these migrating planets may also have perturbed the paths of the other planets or even pushed them into unstable orbits. In most of these very dynamic systems, it is also likely some of the planets have fallen into the host star. However, we didn’t know how common these chaotic systems are relative to quieter systems like ours, whose orderly architecture has favored the flourishing of life on Earth. Even with the most precise astronomical instruments available, it would be very hard to work this out by directly studying exoplanetary systems. Instead, we analyzed the chemical composition of stars in binary systems. 
Binary systems are made up of two stars in orbit around one another. The two stars generally formed at the same time from the same gas, so we expect they should contain the same mix of elements. However, if a planet falls into one of the two stars, it is dissolved in the star’s outer layer. This can modify the chemical composition of the star, which means we see more of the elements that form rocky planets, such as iron, than we otherwise would. Traces of Rocky Planets We inspected the chemical makeup of 107 binary systems composed of sun-like stars by analyzing the spectrum of light they produce. From this, we established how many of the stars contained more planetary material than their companion star. We also found three things that add up to unambiguous evidence that the chemical differences observed among binary pairs were caused by eating planets. First, we found that stars with a thinner outer layer have a higher probability of being richer in iron than their companion. This is consistent with planet-eating, as when planetary material is diluted in a thinner outer layer it makes a bigger change to the layer’s chemical composition. Second, stars richer in iron and other rocky-planet elements also contain more lithium than their companions. Lithium is quickly destroyed in stars, while it is conserved in planets. So an anomalously high level of lithium in a star must have arrived after the star formed, which fits with the idea that the lithium was carried by a planet until it was eaten by the star. Third, the stars containing more iron than their companion also contain more iron than similar stars in the galaxy. However, the same stars have standard abundances of carbon, which is a volatile element and for that reason is not carried by rocks. Therefore these stars have been chemically enriched by rocks, from planets or planetary material. The Hunt for Earth 2.0 These results represent a breakthrough for stellar astrophysics and exoplanet exploration.
Not only have we found that eating planets can change the chemical compositi...

Sep 2

4 min 56 sec

The Intergovernmental Panel on Climate Change released its Sixth Assessment Report in early August, and the outlook isn’t good. The report has added renewed urgency to humanity’s effort to curb climate change. The price of solar energy dropped 89 percent in 10 years, and new wind farms are being built both on land and offshore (with ever-bigger turbines capable of generating ever more energy). But simply adding more wind and solar generation capacity won’t get us very far if we don’t have a cost-effective, planet-friendly way to store the energy they produce. As Zia Huque, general partner at Prime Movers Lab, put it, “To truly harness the power of renewable energy, the world needs to develop reliable, flexible storage solutions for when the sun does not shine or the wind does not blow.” A startup called Energy Vault is working on a unique storage method, and they must be on the right track, because they just received over $100 million in Series C funding last week. The method was inspired by pumped hydro, which has been around since the 1920s and uses surplus generating capacity to pump water up into a reservoir. When the water is released, it flows down through turbines and generates energy just like conventional hydropower. Now imagine the same concept, but with heavy solid blocks and a tall tower rather than water and a reservoir. When there’s excess power—on a sunny or windy day with low electricity demand, for example—a mechanical crane uses it to lift the blocks 35 stories into the air. Then the blocks are held there until demand is outpacing supply. When they’re lowered to the ground (or lowered a few hundred feet through the air), their weight pulls cables that spin turbines, generating electricity. “Heavy” blocks in this case means 35 tons (70,000 pounds or 31,751 kg). 
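The storage mechanism is plain gravitational potential energy, E = mgh. A rough per-block sketch, where the ~120-meter lift height for a 35-story tower is an estimate rather than a figure from Energy Vault:

```python
g = 9.81                 # gravitational acceleration, m/s^2
block_mass_kg = 31_751   # one 35-ton block, per the article
lift_height_m = 120      # ~35 stories (rough assumption)

# Potential energy gained by one lift: E = m * g * h
energy_j = block_mass_kg * g * lift_height_m
energy_kwh = energy_j / 3.6e6   # 1 kWh = 3.6 million joules
print(round(energy_kwh, 1))     # roughly 10 kWh banked per block per lift, before losses
```

Each lift therefore stores on the order of 10 kWh, so a tower's megawatt-hour-scale capacity comes from cycling many blocks through many lifts.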
The blocks are made of a composite material that uses soil and locally-sourced waste, which can include anything from concrete debris and coal ash to decommissioned wind turbine blades (talk about coming full circle). Besides putting material that would otherwise go into a landfill to good use, this also means the blocks can be made locally, and thus don’t need to be transported (and imagine the cost and complexity of transporting something that heavy, oy). The cranes that lift and lower the blocks have six arms, and they’re controlled by fully-automated custom software. Energy Vault says the towers will have a storage capacity of up to 80 megawatt-hours, and be able to continuously discharge 4 to 8 megawatts for 8 to 16 hours. The technology is best suited for long-duration storage with very fast response times. The Series C funding was led by Prime Movers Lab, with existing investors SoftBank and Saudi Aramco adding additional funds and several new investors joining. Energy Vault plans to use the funding to roll out its EVx platform, launched in April of this year. The platform includes performance enhancements like round-trip efficiency up to 85 percent, a lifespan of over 35 years, and a flexible, modular design that’s shorter than the original—which means it could more easily be built in or near densely-populated areas. Huque called Energy Vault a “gamechanger” in the transition to green energy, saying the company “has cracked the code with a transformative solution designed to fulfill clean energy demand 24/7 with a more efficient, durable, and environmentally sustainable approach.” The company will roll out its EVx platform in the US late this year, moving on to fulfill contracts in Europe, the Middle East, and Australia in 2022. Image Credit: Energy Vault

Sep 1

3 min 57 sec

Deep learning is solving biology’s deepest secrets at breathtaking speed. Just a month ago, DeepMind cracked a 50-year-old grand challenge: protein folding. A week later, they produced a totally transformative database of more than 350,000 protein structures, including over 98 percent of known human proteins. Structure is at the heart of biological functions. The data dump, set to explode to 130 million structures by the end of the year, allows scientists to foray into the previously unexplored “dark matter”—proteins unseen and untested—of the human body’s makeup. The end result is nothing short of revolutionary. From basic life science research to developing new medications to fight our toughest disease foes like cancer, deep learning gave us a golden key to unlock new biological mechanisms—either natural or synthetic—that were previously unattainable. Now, the AI darling is set to do the same for RNA. As the middle child of the “DNA to RNA to protein” central dogma, RNA didn’t get much press until its Covid-19 vaccine contribution. But the molecule is a double hero: it both carries genetic information, and—depending on its structure—can catalyze biological functions, regulate which genes are turned on, tweak your immune system, and even crazier, potentially pass down “memories” through generations. It’s also frustratingly difficult to understand. Similar to proteins, RNA also folds into complicated 3D structures. The difference, according to Drs. Rhiju Das and Ron Dror at Stanford University, is that we know comparatively little about these molecules. There are 30 times as many types of RNA as there are proteins, but the number of deciphered RNA structures is less than one percent of that for proteins. The Stanford team decided to bridge that gap. In a paper published last week in Science, they described a deep learning algorithm called ARES (Atomic Rotationally Equivalent Scorer) that efficiently solves RNA structures, blasting previous attempts out of the water.
The authors “have achieved notable progress in a field that has proven recalcitrant to transformative advances,” said Dr. Kevin Weeks at the University of North Carolina, who was not involved in the study. Even more impressive, ARES was trained on only 18 RNA structures, yet was able to extract substantial “building block” rules for RNA folding that’ll be further tested in experimental labs. ARES is also input agnostic, in that it isn’t specifically tailored to RNA. “This approach is applicable to diverse problems in structural biology, chemistry, materials science, and beyond,” the authors said. Meet RNA The importance of this biomolecule for our everyday lives is probably summarized as “Covid vaccine, mic drop.” But it’s so much more. Like proteins, RNA is transcribed from DNA. It also has four letters, A, U, C, and G, with A grabbing U and C tethered to G. RNA is a whole family, with the most well-known type being messenger RNA, or mRNA, which carries the genetic instructions to build proteins. But there’s also transfer RNA, or tRNA—I like to think of this as a transport drone—that grabs onto amino acids and shuttles them to the protein factory, microRNA that controls gene expression, and even stranger cousins that we understand little about. Bottom line: RNA is both a powerful target and inspiration for genetic medicine or vaccines. One way to shut off a gene without actually touching it, for example, is to kill its RNA messenger. Compared to gene therapy, targeting RNA could have fewer unintended effects, all the while keeping our genetic blueprint intact. In my head, RNA often resembles tangled headphones. It starts as a string, but subsequently tangles into a loop-de-loop—like twisting a rubber band. That twisty structure then twists again with surrounding loops, forming a tertiary structure. Unlike frustratingly annoying headphones, RNA twists in semi-predictable ways. It tends to settle into one of several structures. 
These are kind of like the shape your body contorts ...

Aug 31

8 min 39 sec

3D printing is picking up speed as a construction technology, with 3D printed houses, schools, apartment buildings, and even Martian habitat concepts all being unveiled in the last year (not to mention Airbnbs and entire luxury communities). Now another type of structure is being added to this list: military barracks. ICON, a construction technologies startup based in Austin, Texas, announced the project earlier this month in partnership with the Texas Military Department. At 3,800 square feet, the barracks will be the biggest 3D printed structure in North America. It’s edged out for the worldwide title by at least one other building, a 6,900-square-foot complex in Dubai used for municipal offices. The barracks are located at the Camp Swift Training Center in Bastrop, Texas, and are replacing temporary facilities that have already been used for longer than their intended lifespan. Seventy-two soldiers will stay in the building, sleeping in bunk beds, while they train for missions and prepare for deployment. “The printed barracks will not only provide our soldiers a safe and comfortable place to stay while they train, but because they are printed in concrete, we anticipate them to last for decades,” said Colonel Zebadiah Miller, the Texas Military Department’s director of facilities. The energy-efficient barracks are being built with ICON’s Vulcan 3D printer, the initial iteration of which was 11.5 feet tall by 33 feet wide, made up of an axis set on a track. The printer’s “ink” is a proprietary concrete mix, which it puts down in stacked layers from the ground up. The building was designed by Austin-based Logan Architecture, a firm that had previously worked with ICON on the East 17th Residences, four partially 3D printed homes that went on the market in Austin earlier this year. Soldiers will move into the barracks this fall.
With the announcement of $207 million raised in Series B funding this week, ICON is well-positioned to launch several more projects in the coming months and years. In May the company unveiled both its new Vulcan printer—1.5 times larger and 2 times faster than the previous version—and its House Zero line of homes, optimized and designed specifically for construction via 3D printing. TechCrunch reported last week that ICON’s revenue has grown by 400 percent every year since its 2018 launch. The startup tripled its team in the past year, hitting the benchmark of over 100 employees, and plans to double in size again within the next year. What this all comes down to is that a lot of people and organizations are seeing the benefits of 3D printing as a construction tool, and at the rate the technology is growing, houses and barracks are just the beginning. “ICON continues our missional work to deliver dignified, resilient shelter for social housing, disaster-relief housing, market-rate homes, and now, homes for those serving our country,” said ICON co-founder Evan Loomis. “We are scaling this technology across Texas, the US, and eventually the world. This is the beginning of a true paradigm shift in homebuilding.” Image Credit: ICON

Aug 30

3 min 20 sec

In the search for life beyond our planet, scientists have long focused on conditions similar to those found here. This makes sense. Earth is the only place we know life exists. But in recent years, exactly where such conditions might occur—in particular the presence of liquid water oceans—has expanded. Scientists now believe there are oceans with the potential for life in the interiors of moons, dwarf planets, and even large asteroids. Space agencies plan to visit some of these interior ocean worlds in the not-too-distant future. But of course, we can’t yet visit or study such tiny planetary bodies around other stars. So scientists are focused on roughly Earth-sized exoplanets in the search for life. Earth-like planets in the habitable zone, where liquid water on the surface is possible, do exist—and are even relatively abundant, scientists believe—but they may not be the only or even best place to search for life given the observational tools at hand. A recent paper by a Cambridge team published in The Astrophysical Journal identified a new class of exoplanet with the potential for life—despite relatively extreme conditions. These planets, classified as Hycean worlds (a mashup of hydrogen and ocean), are bigger than Earth and smaller than Neptune. Typically, planets in this size range are divided into rocky planets like Earth, dubbed super-Earths, and ice giants like Neptune, called mini-Neptunes. Hycean worlds sit in between the two. They have extensive atmospheres dominated by hydrogen and large, planet-wide oceans. They can be up to 2.6 times larger than Earth, with temperatures as high as 395 degrees Fahrenheit (202 degrees Celsius). Despite crushing pressures and scorching temperatures, the team deems these conditions “habitable” because we know they’re amenable to the microbial life we see on Earth in extreme oceanic environments.
An earlier study by the same team of researchers looked at a specific mini-Neptune (K2-18b) and found that under certain circumstances, such planets could host life. This led them to model the full range of planetary and stellar attributes that, in theory, would allow for the possibility of life and to look into the likelihood we could detect it. The team says Hycean planets could broadly expand the number of worthy exoplanets in our search. This is, in part, because the Hycean habitable zone is wider than the habitable zone for Earth-like planets. Further, conditions suitable for life could exist on the dark sides of tidally locked Hycean planets close to their stars (where one side never faces the sun) as well as on more distant, relatively chilly planets receiving less sunlight. They also may be very common. Of the thousands of exoplanets discovered, the “vast majority” belong to the class of planets sized between Earth and Neptune. Indeed, the team identified a group of 11 potential Hycean planets orbiting red dwarf stars near Earth—between 35 and 150 light years away—worthy of observation. In addition to widening the pool of potential candidates, the team argues it could be easier to look for the telltale chemical signs of life on Hycean planets given today’s tools. One such tool, the James Webb Space Telescope (JWST), Hubble’s successor, was packed up for shipping to its launch site this week. The JWST is scheduled to launch later this year. In addition to peering deep into the early universe, the telescope will scan the atmospheres of promising exoplanets. To detect the byproducts (or biosignatures) of living creatures, scientists will analyze the spectrum of starlight shining through those atmospheres. For Earth-like planets, where the atmosphere hugs the surface fairly closely, this may be relatively difficult and require data gathering from multiple orbits of—and transits across—a candidate’s host star.
Hycean planets, on the other hand, are bigger and have atmospheres extending higher above the surface—allowing for more sunlight to pass through them. This may make them more eff...

Aug 29

5 min 25 sec

Much of the recent progress in AI has come from building ever-larger neural networks. A new chip powerful enough to handle “brain-scale” models could turbocharge this approach. Chip startup Cerebras leaped into the limelight in 2019 when it came out of stealth to reveal a 1.2-trillion-transistor chip. The size of a dinner plate, the chip, called the Wafer Scale Engine, was the world’s largest computer chip. Earlier this year Cerebras unveiled the Wafer Scale Engine 2 (WSE-2), which more than doubled the number of transistors to 2.6 trillion. Now the company has outlined a series of innovations that mean its latest chip can train a neural network with up to 120 trillion parameters. For reference, OpenAI’s revolutionary GPT-3 language model contains 175 billion parameters. The largest neural network to date, which was trained by Google, had 1.6 trillion. “Larger networks, such as GPT-3, have already transformed the natural language processing landscape, making possible what was previously unimaginable,” said Cerebras CEO and co-founder Andrew Feldman in a press release. “The industry is moving past 1 trillion parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters.” The genius of Cerebras’ approach is that rather than taking a silicon wafer and splitting it up to make hundreds of smaller chips, it makes a single massive one. While your average GPU will have a few hundred cores, the WSE-2 has 850,000. Because they’re all on the same hunk of silicon, they can work together far more seamlessly. This makes the chip ideal for tasks that require huge numbers of operations to happen in parallel, which includes both deep learning and various supercomputing applications. And earlier this week at the Hot Chips conference, the company unveiled new technology that is pushing the WSE-2’s capabilities even further.
A major challenge for large neural networks is shuttling around all the data involved in their calculations. Most chips have a limited amount of memory on-chip, and every time data has to be shuffled in and out it creates a bottleneck, which limits the practical size of networks. The WSE-2 already has an enormous 40 gigabytes of on-chip memory, which means it can hold even the largest of today’s networks. But the company has also built an external unit called MemoryX that provides up to 2.4 petabytes of high-performance memory, which is so tightly integrated it behaves as if it were on-chip. Cerebras has also revamped its approach to the data it shuffles around. Previously the guts of the neural network would be stored on the chip, and only the training data would be fed in. Now, though, the weights of the connections between the network’s neurons are kept in the MemoryX unit and streamed in during training. By combining these two innovations, the company says, it can train networks two orders of magnitude larger than anything that exists today. Other advances announced at the same time include the ability to run extremely sparse (and therefore efficient) neural networks, and a new communication system dubbed SwarmX that makes it possible to link up to 192 chips to create a combined total of 163 million cores. How much all this cutting-edge technology will cost and who is in a position to take advantage of it is unclear. “This is highly specialized stuff,” Mike Demler, a senior analyst with the Linley Group, told Wired. “It only makes sense for training the very largest models.” While the size of AI models has been increasing rapidly, it’s likely to be years before anyone can push the WSE-2 to its limits. And despite the insinuations in Cerebras’ press material, just because the parameter count roughly matches the number of synapses in the brain, that doesn’t mean the new chip will be able to run models anywhere close to the brain’s complexity or performance.
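The weight-streaming idea described above (the full model lives in external memory, and one layer's weights are fed to the chip at a time) can be sketched in miniature. This is purely illustrative, not Cerebras' actual API; `external_weight_store` is an invented stand-in for a MemoryX-style unit:

```python
import random

def external_weight_store(num_layers, layer_size, seed=0):
    # Stand-in for an external memory unit: yields one layer's weights
    # at a time, so the "chip" never holds the whole model at once.
    rng = random.Random(seed)
    for _ in range(num_layers):
        yield [rng.uniform(-0.1, 0.1) for _ in range(layer_size)]

def forward(x, num_layers, layer_size):
    peak_on_chip = 0  # most weights resident at any one moment
    for weights in external_weight_store(num_layers, layer_size):
        peak_on_chip = max(peak_on_chip, len(weights))
        x = sum(w * x for w in weights)  # toy stand-in for a layer's math
    return x, peak_on_chip

out, peak = forward(1.0, num_layers=4, layer_size=8)
print(peak)  # 8: one layer resident at a time, not all 32 weights
```

The point of the sketch is only the memory pattern: peak on-chip storage scales with a single layer rather than with the whole network, which is what lets model size outgrow on-chip memory.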
There’s a major debate in AI circles today over whether we can achieve general ar...

Aug 27

4 min 51 sec

Imagine if you could power your kettle using the energy generated from the vegetable cuttings quietly breaking down in your kitchen’s compost bin. That reality might not be so far off with the growth of biogas technology. Biogas is a green alternative to fossil fuels that not only helps to reduce toxic emissions but also provides cheap, clean energy. It’s made up of a mixture of methane, carbon dioxide, and a little hydrogen sulfide and water vapor, all of which is produced by microbes that live on organic raw material within an airtight digester container. The efficiency of the system depends on the size and insulation capability of the digester, as well as the quantity of methane produced from the “feedstock,” which can be anything from carrot leaves and onion peels to residue from gardening. Biogas is “green” because it reduces the release of greenhouse gases into the atmosphere from decomposing food waste. Instead, these gases are stored and used for generating heat and electricity, making the energy produced from waste more sustainable. Yet although biogas has been promoted as a way of helping to reduce carbon emissions for a few years now—and has actually been used to power households since as early as the 10th century BC (for heating bath water in the Middle East)—it still represented only around 0.004 percent of total EU gas consumption in 2019. So why is uptake so low, and what can be done about it? Digesters in Practice Micro-digesters (between two and ten cubic meters) can power individual household systems for up to 12 hours per day, while large digesters of 50 cubic meters can be linked to the local gas grid to support communities for up to 250 hours. In these systems, a pipe at the top of the digester normally leads to a community gas tank or household appliance. An example of an innovative small-scale biogas system is the Methanogen micro-digester.
There is one running at Calthorpe community garden, a multi-functional urban community center in Islington, London. The unit sits in a repurposed shed next to a vegetable garden. The energy, generated from food and garden waste from the surrounding houses, is supplied to the center’s kitchen hob through a pipe. The digester is run by community volunteers, whose mission is to improve the physical and emotional well-being of residents living in the center’s surrounding areas by encouraging them to grow food and spend more time in nature. An even more ambitious initiative is running on the Swedish island of Gotland, where an eco-village, Suderbyn, has been created using zero-carbon materials. A community-run digester was set up to create heat using food and agricultural waste from the community. Inspired by Suderbyn’s success, similar sites have been launched in the UK at Hockerton, near Nottingham, and in Grimsby. The Uptake Problem But why are more digesters not springing up? Our research set out to understand the challenges responsible for the slow uptake of this technology. To better understand people’s attitudes towards biogas, we carried out a study of community biogas generation in Europe. Our research, conducted through interviews and consultation workshops, found that one of the barriers stopping biogas use was prejudice arising from poor public understanding of the technology and its benefits. People we spoke to were concerned that local digesters would produce a nasty smell, or that their industrial appearance would blight the landscape. In fact, many digesters are fairly small, and would only produce smells if the system broke down. Other stumbling blocks include a lack of technical expertise in building or maintaining digesters, a lack of incentives to attract local businesses, and the high cost of the digester, which, depending on its size, can cost between £12,000 and £158,000. 
Because of this, local government assistance will be crucial in bringing biogas to the masses. They should help shoulder the financial cost...

Aug 26

5 min 28 sec

In 2018, GE unveiled its Haliade-X turbine, which has since been the largest and most powerful offshore wind turbine in the world. At 853 feet tall and with a rotor measuring 722 feet across, a single rotation of its blades can power a home for two days (that’s a home in the UK, not the US; homes here tend to be bigger energy hogs). Last year, the Haliade-X prototype located in the Netherlands set a new world record by generating 312 megawatt-hours of continuous power in one day. But the record-setting turbine is about to get dethroned by a new, even bigger and more powerful arrival. China’s MingYang Smart Energy Group this week announced development of its MySE 16.0-242, a 16-megawatt turbine that can reportedly power 20,000 homes. Standing 866 feet tall, the turbine only has a few feet on the Haliade-X’s height, but its rotor is the differentiator at 794 feet across. Each blade is 387 feet long, and their rotation will sweep an area bigger than six soccer fields. Let’s put some more visuals to those numbers. At 866 feet, the turbine is taller than the 70-story GE building in New York’s Rockefeller Center. An American football field is 360 feet long, so imagine a blade that’s even longer, and a rotor whose diameter exceeds the height of the Golden Gate Bridge’s towers. It’s hard to wrap your head around, especially when you consider that these gigantic pieces will be assembled into one unit in the middle of the ocean, then work together to produce clean energy. The company says the turbine can be anchored to the ocean floor or installed on a floating base (the proportions of which are equally hard to imagine). Putting a man-made structure of these dimensions with moving parts in the ocean must have some sort of impact on the surrounding marine life. However, not a ton of research has been done in this area, and as offshore wind becomes a more popular source of power, it’s probably a good idea to make sure we’re not wrecking entire ecosystems by plopping turbines into their midst.
To that end, the UK’s Natural Environment Research Council launched a study this week called ECOWind. In partnership with The Crown Estate, which manages the seabed of England, Wales, and Northern Ireland, the project will collect and analyze data on offshore wind’s impact on marine ecosystems, and is scheduled to last four years. China will want to heed the findings; it’s been the world leader in new offshore wind installations for three years running, and installed more than half the world’s offshore wind capacity last year. As demand for renewable energy sources grows, offshore wind will continue to be scaled up, both in terms of the number of turbines installed and the power generation capacity of the turbines. MingYang’s new turbine can reportedly withstand typhoon-force winds. Founded in 2006, MingYang is a public company whose stock trades on the Shanghai exchange. Earlier this year, the company secured a contract to provide 10 turbines (of an earlier model than the 16.0-242) for the Taranto offshore wind park off the Italian coast. It will be the first offshore wind farm in the Mediterranean, and is MingYang’s first European deal. A prototype of the MySE 16.0-242 will be built in 2022, with commercial production of the turbine scheduled to start in early 2024. Image Credit: MingYang Smart Energy Group Co. Ltd.
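The "six soccer fields" figure quoted above checks out with a little circle arithmetic; the pitch dimensions below are an assumption, since pitch sizes vary:

```python
import math

# Swept area of a 794-foot-diameter rotor: pi * r^2
rotor_diameter_ft = 794
swept_area_sqft = math.pi * (rotor_diameter_ft / 2) ** 2  # ~495,000 sq ft

# A full-size soccer pitch of roughly 345 x 225 ft (assumed dimensions)
pitch_area_sqft = 345 * 225

print(swept_area_sqft / pitch_area_sqft)  # roughly 6.4 pitches
```

With those assumptions the rotor sweeps about six and a half pitches per rotation, consistent with the "bigger than six soccer fields" claim.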

Aug 25

3 min 48 sec

Nature hides astonishing medical breakthroughs. Take CRISPR, the transformative gene editing tool. It was inspired by a lowly bacterial immune defense system and co-opted to edit our genes to treat inherited diseases, bolster cancer treatments, or even extend lifespan. Now, Dr. Feng Zhang, one of the pioneers of CRISPR, is back with another creation that could unleash the next generation of gene therapy and RNA vaccines. Only this time, his team looked deep inside our own bodies. Powerful as they are, DNA and RNA therapeutics need to hitch a ride into our cells to work. Scientists usually call on viral vectors—delivery vehicles made from safe viruses—or lipid nanoparticles, little blobs of protective fat, to encapsulate new genetic material and tunnel into cells. The problem? Our bodies aren’t big fans of foreign substances—particularly ones that trigger an undesirable immune response. What’s more, these delivery systems aren’t great with biological zip codes, often swarming the entire body instead of focusing on the treatment area. These “delivery problems” are half the battle for effective genetic medicine with few side effects. “The biomedical community has been developing powerful molecular therapeutics, but delivering them to cells in a precise and efficient way is challenging,” said Zhang, who is affiliated with the Broad Institute, the McGovern Institute, and MIT. Enter SEND. The new delivery platform, described in Science, dazzles with its sheer ingenuity. Rather than relying on foreign carriers, SEND (selective endogenous encapsidation for cellular delivery) commandeers human proteins to make delivery vehicles that shuttle in new genetic elements. In a series of tests, the team embedded RNA cargo and CRISPR components inside cultured cells in a dish. The cells, acting as packing factories, used human proteins to encapsulate the genetic material, forming tiny balloon-like vessels that can be collected as a treatment.
Even weirder, these carrier proteins are encoded by viral genes domesticated eons ago by our own genome through evolution. Because the proteins are essentially human, they’re unlikely to trigger our immune system. Although the authors only tried one packaging system, far more are hidden in our genomes. “That’s what’s so exciting,” said study author Dr. Michael Segel, adding that the system they used isn’t unique; “There are probably other RNA transfer systems in the human body that can also be harnessed for therapeutic purposes.” The Body’s Shipping Infrastructure Our cells are massive chatterboxes. And they’ve got multiple phone lines. Electricity is a popular one. It’s partly what keeps neurons hooked up into networks and heart cells in sync. Hormones are another, linking up cells from halfway around the body through chemicals in the bloodstream. But the strangest comes from an age-old truce between human and virus. Scouring the human genome today, it’s clear we have viral DNA and other genetic elements embedded inside our own double helices. Most of these viral additions have lost their original functions. Some, however, have been recruited to build our bodies and minds. Take Arc, a protein made from a gene otherwise known as gag—a core viral gene that’s common in our genomes. Arc is a memory grandmaster: as we learn, the protein forms tiny capsules that transfer biological material, which in turn helps to cement new memories into our neural network repertoire. Another protein similar to gag, dubbed PEG10, can grab onto RNA and also form bubbly spaceships to help develop the placenta and aid reproduction. If PEG10 makes the cardboard packaging for genetic material, then the mail stamp comes from another viral gene family, fusogens. These genes create a zip code of sorts, allowing each spaceship, carrying its cargo, to dock onto targeted cells.
Although originally viral in nature, these genes have immigrated into our genomes and adapted into an amazingly specific transportation system that allows cells to share information...

Aug 24

8 min 33 sec

Today’s brain implants are bulky and can typically only record from one or two locations. Now researchers have shown that a network of tiny “neurograins” can be used to wirelessly record and stimulate neurons in multiple locations in rat brains. Researchers have been experimenting with brain-computer interfaces (BCIs) that can record and stimulate groups of neurons for decades. But in recent years there has been growing interest in using them to treat diseases like epilepsy, Parkinson’s, or various psychiatric disorders. More speculatively, some think they could soon be implanted in healthy people to help us monitor our brain function and even boost it. Last year, Elon Musk said brain implants being built by his startup Neuralink will one day be like “a Fitbit in your skull.” First, though, they will have to get much more accurate and far less obtrusive. New research led by a team at Brown University has made significant strides on the latter problem by developing tiny implants measuring less than 0.1 cubic millimeters. The implants can both record and stimulate brain activity, and these “neurograins” can be combined to create a network of implants that can be controlled and powered wirelessly. “One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible,” Arto Nurmikko, who led the research, said in a press release. “Up to now, most BCIs have been monolithic devices—a bit like little beds of needles. Our team’s idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex.” Each of the tiny chips features electrodes to pick up electrical signals from brain tissue, circuitry to amplify the signal, and a tiny coil of wire that sends and receives wireless signals. The chips are attached to the surface of the brain, and a thin relay coil that helps improve wireless power transfer to the neurograins is laid over the area where they are placed.
A thin patch containing another coil is then stuck to the outside of the scalp above the relay coil. This acts like a mini cellphone tower, using a specially designed network protocol to connect to each of the neurograins individually. It also transmits power to the neurograins. The concept is similar to the “neural dust” developed at the University of California, Berkeley, which has since been spun out by startup Iota Biosciences, though the neurograins are an order of magnitude smaller. In a paper in Nature Electronics, the team showed that they could implant 48 of the tiny chips into a rat’s brain and use them to record and stimulate neural activity. While ultimately both capabilities will be integrated into one device, for the purpose of the study some neurograins were designed to record while others were built to stimulate. The researchers say the fidelity of the recordings has room for improvement, but they were able to pick up spontaneous brain signals and detect when the brain was stimulated using a conventional implant. They also showed they could direct a single neurograin to stimulate neural activity, which they were able to pick up with conventional recording devices. The team says their current setup could support up to 770 of the neurograins, but they envision scaling to thousands. That would likely require further miniaturization, but the paper notes that the chip design should translate from the 65 nanometer fabrication process it currently uses to a 22 nanometer one. The same group has also developed a novel method for implanting large numbers of tiny wireless sensors into soft tissue. Plenty of work still needs to be done to improve the quality of the recordings the implants are capable of, and to verify their safety in humans. But the ability to coordinate many small implants into a network is an interesting advance, with plenty of potential for both research and medicine. Image Credit: Image by Raman Oza from Pixabay
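The scalp coil's job of addressing each implant individually over one shared wireless link resembles time-slot polling. The sketch below is purely illustrative; the `Neurograin` class and the polling scheme are invented stand-ins, not the paper's actual protocol:

```python
import random

class Neurograin:
    """Invented stand-in for one tiny implant on a shared channel."""
    def __init__(self, ident, seed):
        self.ident = ident
        self._rng = random.Random(seed)
    def sample(self):
        return self._rng.gauss(0.0, 1.0)  # stand-in for a neural voltage

def poll_cycle(grains):
    # One polling cycle: each grain is read in its own time slot, so
    # readings never collide on the shared wireless channel.
    return {g.ident: g.sample() for g in grains}

grains = [Neurograin(i, seed=i) for i in range(48)]  # 48 chips, as in the study
frame = poll_cycle(grains)
print(len(frame))  # one reading per implant per cycle
```

The design point this illustrates: with individually addressed slots, adding more implants lengthens the cycle but never causes interference, which is why the hub can scale toward hundreds of grains.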

Aug 23

4 min 8 sec

Swiss researchers at the University of Applied Sciences Graubünden this week claimed a new world record for calculating the number of digits of pi—a staggering 62.8 trillion figures. By my estimate, if these digits were printed out they would fill every book in the British Library ten times over. The researchers’ feat of arithmetic took 108 days and 9 hours to complete, and dwarfs the previous record of 50 trillion figures set in January 2020. But why do we care? The mathematical constant pi (π) is the ratio of a circle’s circumference to its diameter, and is approximately 3.1415926536. With only these ten decimal places, we could calculate the circumference of Earth to a precision of less than a millimeter. With 32 decimal places, we could calculate the circumference of our Milky Way galaxy to the precision of the width of a hydrogen atom. And with only 65 decimal places, we would know the size of the observable universe to within a Planck length—the shortest possible measurable distance. What use, then, are the other 62.79 trillion digits? While the short answer is that they are not scientifically useful at all, mathematicians and computer scientists will be eagerly awaiting the details of this gargantuan computation for a variety of reasons. What Makes Pi So Fascinating? The concept of pi is simple enough for a primary school student to grasp, yet its digits are notoriously difficult to calculate. A number like 1/7 needs infinitely many decimals to write down—0.142857142857…—but the numbers repeat themselves every six places, making it easy to understand. Pi, on the other hand, is an example of an irrational number, in which there are no repeating patterns. Not only is pi irrational, but it is also transcendental, meaning it cannot be defined through any simple equation featuring whole numbers.
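Those precision claims are easy to sanity-check: a circumference computed as pi times diameter is off by the diameter times the error in the value of pi used. A quick sketch for the Earth figure (the mean diameter of ~12,742 km is an assumed round value):

```python
import math

pi_10dp = 3.1415926536         # pi to ten decimal places, as in the article
earth_diameter_m = 12_742_000  # assumed mean Earth diameter, ~12,742 km

# Circumference error = diameter * error in the value of pi used
circ_error_m = earth_diameter_m * abs(math.pi - pi_10dp)
print(circ_error_m)  # on the order of 0.0001 m, well under a millimeter
```

The same scaling argument gives the galaxy and observable-universe claims: each extra decimal place of pi shrinks the circumference error by another factor of ten.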
Mathematicians around the world have been computing pi since ancient times, but techniques to do so changed dramatically after the 17th century, with the development of calculus and the techniques of infinite series. For example, the Madhava series (named after the Indian-Hindu mathematician Madhava of Sangamagrama), says: π = 4(1 – 1/3 + 1/5 – 1/7 + 1/9 – 1/11 + …) By adding more and more terms, this computation gets closer and closer to the true value of pi. But it takes a long time—after 500,000 terms, it produces only five correct decimal places of pi! The search for new formulae for pi adds to our mathematical understanding of the number, while also letting mathematicians vie for bragging rights in the quest for more digits. The infinite sum used in the 2020 record-breaking effort was discovered in 1988 and can calculate 14 new digits of pi for each new term that is added to the sum. While breaking the record may be one of the key motivators for finding new digits of pi, there are two other important benefits. The first is the development and testing of supercomputers and new high-precision multiplication algorithms. Optimizing the computation of pi leads to computer hardware and software that benefit many other areas of our lives, from accurate weather forecasting to DNA sequencing and even COVID modeling. The latest computation of pi was 3.5 times as fast as the previous effort, despite the extra 12 trillion decimal places—an impressive increase in supercomputing performance in just 18 months. The second is the exploration of the very nature of pi. Despite centuries of research, there are still fundamental unanswered questions about the way its digits behave. It is conjectured that pi is a “normal” number, meaning all possible sequences of digits should appear equally often.
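The sluggish convergence of the Madhava series above can be seen directly in a few lines; this sketch is just an illustration, not the method used for any record:

```python
import math

def madhava_pi(n_terms):
    """Partial sum of pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = 0.0
    sign = 1.0
    for k in range(n_terms):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4 * total

for n in (10, 1000, 500_000):
    print(n, abs(madhava_pi(n) - math.pi))  # error shrinks only like 1/n
```

After 500,000 terms the error is still around a millionth, i.e. roughly five or six correct decimal places, matching the figure above, whereas the 1988 formula used for the records gains 14 digits per term.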
For example, we expect the digit 3 to appear as often as the digit 8, and the digit string “12345” to appear as often as “99999.” But we don’t even know if each decimal digit appears infinitely often in pi, let alone whether there are more complex patterns waiting to be discovered. The data for the new pi computation have not yet been released, as the ...

Aug 22

5 min 29 sec

Before 2020, many of us had never heard of mRNA. With the development of Covid-19 vaccines dependent on this molecule, though, it was all over the news. Covid was the first disease mRNA therapeutics tackled, and given the success of the Pfizer and Moderna vaccines at preventing severe cases of the virus, it won’t be the last. New candidates are lining up, with scientists saying mRNA could make it possible to develop vaccines against diseases that, until now, haven’t had solutions in sight. One of these is HIV; Moderna (whose name, by the way, comes from “modified RNA”) launched trials of its experimental mRNA-based HIV vaccine, called mRNA-1644, this week. Phase 1 The Phase 1 trial will consist of giving the vaccine to 56 adults who don’t have HIV, with the primary goals being to evaluate its safety and monitor the development of an immune response in participants. In addition to the initial version of the vaccine, Moderna also developed a variant called mRNA-1644-v2-Core (catchy, isn’t it?). As detailed in Moderna’s August 11 submission to the National Institutes of Health’s Clinical Trials registry, participants in the trial will be split into four different groups, with one group getting mRNA-1644, a second group getting mRNA-1644-v2-Core, and the remaining two groups getting a mix of both versions. Rather than a blind trial, where people don’t know which injection they’re receiving, participants will be informed of what they’re getting. The Phase 1 trial is scheduled to take around 10 months. Later-stage trials will likely take much longer than the Covid-19 trials did; as Covid spread like wildfire in 2019 and 2020, getting hundreds of thousands sick, it was much easier to give people a vaccine and quickly see who became infected and who didn’t. HIV is, thankfully, far less prevalent, and you can go a lifetime without ever coming into contact with the virus.
mRNA 101 As you’ve probably heard by now through reading up on the Covid vaccines, mRNA-based versions work a little differently than traditional vaccines, which use a weakened piece of the virus to expose our bodies to it. As detailed in an excellent, very-worth-listening-to ‘Gamechangers’ podcast from The Economist, mRNA vaccines are intended to train our cells to create proteins to fight viruses. mRNA is the intermediary between DNA and proteins, and proteins control pretty much everything that happens in our cells. DNA makes mRNA, which in turn acts as a “messenger” and instructs our cells to make proteins. The “workshop” where the proteins get made is the cell’s ribosome. “This is fundamentally the idea behind RNA therapeutics,” said Natasha Loder, health policy editor at The Economist. “It’s about taking control of that, essentially manipulating these messengers.” One of the biggest obstacles scientists had to solve was getting modified RNA into cells without it triggering an immune response. “Part of the mRNA molecule was alerting the immune system, and just by tweaking the structure of one of those molecules, it was easier for the molecule to sneak in without being recognized,” said Loder. Scientists at the University of Pennsylvania were able to create mRNA that could get past cells’ defenses, but still be recognized by the ribosome. For the Covid vaccine, this entailed getting the ribosome to start cranking out that spike protein. 24 to 48 hours after getting a shot of the vaccine, the recipient’s cells start to manufacture the spike protein. The body tags it as an invader and launches an immune response. Then, when the person comes in contact with the real virus, their cells are already prepared to fight the infection before it takes over. The HIV virus is a bit more complicated. It creates new strains at a rapid rate, meaning that a vaccine targeting a single surface protein wouldn’t work. 
Instead, this vaccine’s aim will be to generate broadly neutralizing antibodies (bnAbs) that are effective against many variants. A New Frontier Much of the vacc...
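The DNA → mRNA → protein relay described in the "mRNA 101" passage can be sketched in a few lines. This is a toy model of the genetic code (only four codons included, and transcription simplified to the coding-strand T→U substitution), not anything from the actual vaccine:

```python
# mRNA codon -> amino acid (one-letter code); heavily abbreviated table.
CODON_TABLE = {
    "AUG": "M",  # start codon, methionine
    "UUU": "F",  # phenylalanine
    "GGC": "G",  # glycine
    "UAA": "*",  # stop codon
}

def transcribe(dna):
    """DNA coding strand -> mRNA: the 'messenger' copy the ribosome reads."""
    return dna.replace("T", "U")

def translate(mrna):
    """The ribosome's job: read codons three letters at a time into protein."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "*":  # stop codon ends the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate(transcribe("ATGTTTGGCTAA")))  # MFG
```

An mRNA vaccine injects only the middle link of this chain, letting the recipient's own ribosomes do the protein manufacturing.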

Aug 20

5 min 53 sec

Vaccine and drug development, artificial intelligence, transport and logistics, climate science—these are all areas that stand to be transformed by the development of a full-scale quantum computer. And there has been explosive growth in quantum computing investment over the past decade. Yet current quantum processors are relatively small in scale, with fewer than 100 qubits, the basic building blocks of a quantum computer. Bits are the smallest unit of information in computing, and the term qubits stems from “quantum bits.” While early quantum processors have been crucial for demonstrating the potential of quantum computing, realizing globally significant applications will likely require processors with upwards of a million qubits. Our new research tackles a core problem at the heart of scaling up quantum computers: how do we go from controlling just a few qubits, to controlling millions? In research published today in Science Advances, we reveal a new technology that may offer a solution. What Exactly Is a Quantum Computer? Quantum computers use qubits to hold and process quantum information. Unlike the bits of information in classical computers, qubits make use of the quantum properties of nature, known as “superposition” and “entanglement,” to perform some calculations much faster than their classical counterparts. Unlike a classical bit, which is represented by either 0 or 1, a qubit can exist in two states (that is, 0 and 1) at the same time. This is what we refer to as a superposition state. Demonstrations by Google and others have shown even current, early-stage quantum computers can outperform the most powerful supercomputers on the planet for a highly specialized (albeit not particularly useful) task—reaching a milestone we call quantum supremacy. Google’s quantum computer, built from superconducting electrical circuits, had just 53 qubits and was cooled to a temperature close to -273℃ in a high-tech refrigerator. 
This extreme temperature is needed to remove heat, which can introduce errors to the fragile qubits. While such demonstrations are important, the challenge now is to build quantum processors with many more qubits. Major efforts are underway at UNSW Sydney to make quantum computers from the same material used in everyday computer chips: silicon. A conventional silicon chip is thumbnail-sized and packs in several billion bits, so the prospect of using this technology to build a quantum computer is compelling. The Control Problem In silicon quantum processors, information is stored in individual electrons, which are trapped beneath small electrodes at the chip’s surface. Specifically, the qubit is coded into the electron’s spin. It can be pictured as a small compass inside the electron. The needle of the compass can point north or south, which represents the 0 and 1 states. To set a qubit in a superposition state (both 0 and 1), an operation that occurs in all quantum computations, a control signal must be directed to the desired qubit. For qubits in silicon, this control signal is in the form of a microwave field, much like the ones used to carry phone calls over a 5G network. The microwaves interact with the electron and cause its spin (compass needle) to rotate. Currently, each qubit requires its own microwave control field. It is delivered to the quantum chip through a cable running from room temperature down to the bottom of the refrigerator at close to -273℃. Each cable brings heat with it, which must be removed before it reaches the quantum processor. At around 50 qubits, which is state-of-the-art today, this is difficult but manageable. Current refrigerator technology can cope with the cable heat load. However, it represents a huge hurdle if we’re to use systems with a million qubits or more. The Solution Is ‘Global’ Control An elegant solution to the challenge of how to deliver control signals to millions of spin qubits was proposed in the late 1990s. 
The idea of “global control” was simple: broadca...
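The "compass needle" picture maps neatly onto a two-component state vector, with the microwave pulse acting as a rotation of that vector. A minimal sketch in pure Python (amplitudes only; the pulse angle is illustrative): a "pi/2 pulse," half the duration of a full flip, puts the spin into an equal superposition of 0 and 1.

```python
import math

def apply_pulse(a, b, theta):
    """Rotate a spin qubit with amplitudes (a, b) for the 0 ('north') and
    1 ('south') states by angle theta, as a resonant microwave pulse does."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * a - 1j * s * b, -1j * s * a + c * b)

# Start in state 0 and apply a pi/2 pulse.
a, b = apply_pulse(1, 0, math.pi / 2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = abs(a) ** 2, abs(b) ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 -- both outcomes equally likely
```

A full pulse (theta = pi) flips the spin from 0 to 1 outright; intermediate angles give any superposition in between, which is exactly what per-qubit microwave control must deliver.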

Aug 19

6 min 39 sec

At the end of 2020, Boston Dynamics released a spirits-lifting, can’t-watch-without-smiling video of its robots doing a coordinated dance routine. Atlas, Spot, and Handle had some pretty sweet moves, though if we’re being honest, Atlas was the one (or, in this case, two) that really stole the show. A new video released yesterday has the bipedal humanoid robot stealing the show again, albeit in a way that probably won’t make you giggle as much. Two Atlases navigate a parkour course, complete with leaping onto and between boxes of different heights, shimmying down a balance beam, and throwing synchronized back flips. The big question that may be on many viewers’ minds is whether the robots are truly navigating the course on their own—making real-time decisions about how high to jump or how far to extend a foot—or if they’re pre-programmed to execute each motion according to a detailed map of the course. As engineers explain in a second new video and accompanying blog post, it’s a combination of both. Atlas is equipped with RGB cameras and depth sensors to give it “vision,” providing input to its control system, which is run on three computers. In the dance video linked above and previous videos of Atlas doing parkour, the robot wasn’t sensing its environment and adapting its movements accordingly (though it did make in-the-moment adjustments to keep its balance). But in the new routine, the Boston Dynamics team says, they created template behaviors for Atlas. The robot can match these templates to its environment, adapting its motions based on what’s in front of it. The engineers had to find a balance between “long-term” goals for the robot—i.e., making it through the whole course—and “short-term” goals, like adjusting its footsteps and posture to keep from keeling over. The motions were refined through both computer simulations and robot testing. 
“Our control team has to create algorithms that can reason about the physical complexity of these machines to create a broad set of high energy and coordinated behavior,” said Atlas team lead Scott Kuindersma. “It’s really about creating behaviors at the limits of the robot’s capabilities and getting them all to work together in a flexible control system.” The limits of the robot’s capabilities were frequently reached while practicing the new parkour course, and getting a flawless recording took many tries. The explainer video includes bloopers of Atlas falling flat on its face—not to mention on its head, stomach, and back, as it under-rotates for flips, crosses its feet while running, and miscalculates the distance it needs to cover on jumps. I know it’s a robot, but you can’t help feeling sort of bad for it, especially when its feet miss the platform (by a lot) on a jump and its whole upper body comes crashing onto said platform, while its legs dangle toward the ground, in a move that would severely injure a human (and makes you wonder if Atlas survived with its hardware intact). Ultimately, Atlas is a research and development tool, not a product the company plans to sell commercially (which is probably good, because despite how cool it looks doing parkour, I for one would be more than a little wary if I came across this human-shaped hunk of electronics wandering around in public). “I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Kuindersma said. “But we’re still in the early days of creating that future.” Image Credit: Boston Dynamics
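A cartoon of the "template behaviors" idea described above: store a handful of parameterized motions, then pick whichever best matches what the perception system reports. Every name and number here is invented for illustration; Boston Dynamics' actual controller is far richer than a lookup.

```python
# Nominal obstacle height (meters) each stored behavior was designed for.
TEMPLATES = {"step_up": 0.2, "jump": 0.5, "vault": 0.9}

def pick_template(perceived_height):
    """Choose the stored behavior closest to the sensed obstacle height."""
    return min(TEMPLATES, key=lambda name: abs(TEMPLATES[name] - perceived_height))

print(pick_template(0.45))  # jump
```

The hard part, of course, is not the selection but blending the chosen template with the short-term balance adjustments while the robot is airborne.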

Aug 18

3 min 40 sec

It’s the dog days of summer. You bite down on a plump, chilled orange. Citrus juice explodes in your mouth in a refreshing, tingling burst. Ahh. And congratulations—you’ve just been vaccinated for the latest virus. That’s one of the goals of molecular farming, a vision to have plants synthesize medications and vaccines. Using genetic engineering and synthetic biology, scientists can introduce brand new biochemical pathways into plant cells—or even whole plants—essentially turning them into single-use bioreactors. The whole idea has a retro-futuristic science fiction vibe. First conceived of in 1986, molecular farming got its boost three decades later, when the FDA approved the first—and only—plant-derived therapeutic protein for humans to treat Gaucher disease, a genetic disorder that prevents people from breaking down fats. But to Drs. Hugues Fausther-Bovendo and Gary Kobinger at Université Laval, Quebec and Galveston National Laboratory, Texas, respectively, we’re just getting started. In a new perspective article published last week in Science, the duo argues that plants have long been an overlooked resource for biomanufacturing. Plants are cheap to grow and resist common forms of contamination that haunt other drug manufacturing processes, while being sustainable and environmentally friendly. The resulting therapeutic proteins or vaccines are often stored inside their seeds or other plant cell components, which can be easily dehydrated for storage—no ultra-cold freezers or sterile carriers required. They also work fast. In just three weeks, the Canadian company Medicago produced a candidate Covid-19 vaccine that mimics the outer layer of the virus to stimulate an immune response. The vaccine is now in late-stage clinical trials. Even wilder, plants themselves can be turned into edible medicines. Rather than insulin shots, people with diabetes could just eat a tomato. Instead of getting a flu jab, you could munch on an ear of fresh, sweet corn. 
The draw of molecular farming encouraged DARPA (Defense Advanced Research Projects Agency) to finance three massive facilities to optimize plant-made vaccines. And if we ever make it to Mars, plants are far easier to cultivate than setting up a whole pharmaceutical operation. “Molecular farming could have a considerable impact on both human and animal health,” the authors said. What’s the Current Alternative? Hijacking other lifeforms to make drugs isn’t new. Take the common yeast, a scientist’s favorite medium for genetic engineering and a brewer’s best friend. Using little circular “spaceships” that carry new genes, called vectors, scientists can introduce brand-new biochemical pathways into these critters. In one recent study, a Stanford team made 34 modifications to the yeast’s DNA to chemically assemble a molecule with widespread effects on human muscles, glands, and tissue. Other mediums for synthesizing drugs, antibodies, and vaccines have relied on a rainbow of hosts, from the exotic—insect cells—to the slightly more mundane, such as eggs. The flu vaccine, for example, is cultured in chicken eggs, which supports the growth of an attenuated version of the virus to help stimulate the immune system. An upcoming Covid-19 vaccine is doing the same. But if you’ve ever had the unfortunate experience of home brewing gone bad—beer, wine, kombucha, or otherwise—you’ll have a visceral feel of the dangers involved. Although using yeast or mammalian cells for biomanufacturing is the norm today, it’s a costly operation. Cells fill massive, rotating jugs inside strictly-controlled facilities. Operations are under constant threat of zoonotic pathogens—dangerous, disease-causing bugs that could waste a whole tank. A Plant-Based Alternative Using plants as replacement biofactories started with a simple calculation: they’re cheap and easy to grow. Plants only require three things: light, water, and soil. Add in fertilizer if you’re feeling fancy. 
Greenhouses, if needed, are still far more econom...

Aug 17

8 min 46 sec

Batteries and renewable energy are helping to decarbonize large swathes of the modern world, but they look less likely to help in areas like industrial heating, long-haul heavy transportation, and long-duration energy storage. Some are touting hydrogen as a potentially emissions-free alternative fuel that could fill the gap. The Infrastructure Investment and Jobs Act, which passed the Senate last week, features $8 billion earmarked to create four regional hydrogen hubs, as well as support for further research and development to accelerate clean hydrogen technology. However, building out a national hydrogen economy that is both competitive and clean will not be easy, say the authors of a commentary article in Joule. They outline the main challenges the effort faces and the key ingredients that will be required to support the production, transport, storage, and use of clean hydrogen. The world already produces 70 million metric tons of hydrogen every year, most of which is used to make petrochemicals and synthesize ammonia for fertilizer. It’s primarily derived from fossil fuels by subjecting methane to steam, high heat, and pressure to break it into hydrogen and carbon dioxide, and costs about $1 per kilogram. Today, that CO2 is simply released into the atmosphere, so this so-called “gray hydrogen” is not especially good for the environment. But there are proposals to capture the CO2 using carbon capture technology and store it deep underground. This so-called “blue hydrogen” costs 50 percent more, but is touted as a clean source of hydrogen. In reality, the authors note that only 70 to 80 percent of CO2 can be reliably captured. A cleaner alternative is to use renewable electricity to power electrolyzers that split water into hydrogen and oxygen, but today that “green hydrogen” costs three to four times as much as gray hydrogen. One less-developed but potentially promising approach is to heat methane in the absence of oxygen. 
This produces “turquoise hydrogen” and solid carbon as a byproduct. This black carbon can be sold for about $1 a kilogram, though the market for it is fairly small and would quickly be saturated if this became the primary method of hydrogen production. To compete with gray hydrogen and fossil fuels, all of these approaches would need to hit a cost of $1 per kilogram, say the authors. At that price, clean hydrogen would become viable to replace gray hydrogen in chemical production and gasoline for transportation applications. To break into industrial heating it would probably have to drop below $0.40 per kilogram, though. On top of that, we also need to find ways to transport and store huge amounts of hydrogen. This presents a major challenge, because the amount of energy in a given volume of hydrogen is a third of that in natural gas. That means either the pressure it’s stored at or the speed at which it’s pumped will need to be boosted three-fold, the authors say. Building a whole new network of high-pressure hydrogen pipelines and storage tanks would be a massive investment. Instead, we could rely on the existing natural gas and electricity infrastructure to transport the feedstock for making clean hydrogen—electricity and methane—to smaller local hydrogen generation facilities. According to the article’s authors, the US Geological Survey should be charged with scanning the country for underground caverns where large amounts of hydrogen could be stored. Also, more research and development should go into proposals to convert hydrogen into chemicals that are easier to store, such as ammonia, light alcohols, and metal hydrides. Some of the steps required to make this happen have already been taken. Earlier this year, the US Energy Secretary launched an initiative to bring the cost of clean hydrogen down to $1 per kilogram by the end of the decade. 
Beyond that, the government also needs to support technology demonstrators to help companies test out key parts of a future hydrogen infrastructure. Federal or ...
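The three-fold throughput figure is easy to sanity-check from rough volumetric energy densities. The numbers below are approximate higher-heating values at standard conditions, our own ballpark inputs rather than figures from the Joule article:

```python
# Approximate volumetric energy density at standard conditions, MJ per cubic meter.
H2_MJ_PER_M3 = 12.7   # hydrogen (approx. higher heating value)
NG_MJ_PER_M3 = 40.0   # natural gas (approx.)

# To deliver the same energy, a pipeline must move this many times the volume
# of hydrogen -- hence pressure or flow speed must rise by roughly this factor.
boost = NG_MJ_PER_M3 / H2_MJ_PER_M3
print(round(boost, 1))  # 3.1
```

That factor of roughly three is what drives the authors' preference for moving electricity and methane through existing infrastructure and making hydrogen locally.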

Aug 16

5 min 52 sec

In the beginning, computer programmers translated their desires into the language of machines. Now, those machines are becoming conversant in the language of their programmers. OpenAI’s newly released Codex, a machine learning algorithm that can parse everyday language and computer code, sits between these worlds. This week, in a blog post and demo, OpenAI showed off Codex’s skills. The algorithm can turn written prompts into computer code with, at times, impressive results. OpenAI believes Codex will prove a worthy sidekick for coders, accelerating their work. What Is Codex? Codex is a descendant of OpenAI’s GPT-3, a sprawling natural-language machine learning algorithm released last year. After digesting and analyzing billions of words, GPT-3 could write (sometimes eerily) passable text with nary but a simple prompt. But when OpenAI released GPT-3 to developers, they quickly learned it could do more. One fascinating discovery was that GPT-3 could write simple code from prompts. But it wasn’t very good at it, so the team decided to fine-tune the algorithm with coding in mind from the start. They took a version of GPT-3 and trained it on billions of lines of publicly available code, and Codex was born. According to OpenAI, Codex is proficient in over a dozen computer languages, but it’s particularly good at Python and, of course, everyday language. These skills in hand, Codex can digest a prompt like, “Add this image of a rocket ship,” and spit out the code necessary to embed an image (provided by the programmer) on the screen. In a demo, the OpenAI team showed how to code a simple video game—from blank screen to playable—using naught but a series of chatty prompts. The demos are impressive, but Codex is not about to replace programmers. OpenAI researchers say Codex currently completes around 37 percent of requests. 
That’s an improvement on an earlier iteration called Copilot, a kind of autocomplete for coders released as a product on Github, which had a success rate of 27 percent. But it took some manual labor in the form of supervised learning with labeled data sets to get it there. (GPT-3, by contrast, was trained on unlabeled data.) So, there’s room for improvement. And like any demo, it’s difficult to predict how useful Codex will be in the real world. OpenAI acknowledges this is just a start, calling it a “taste of the future.” They expect it will get better over time, but part of their motivation for this week’s release was to get Codex into the hands of developers, a strategy that paid off richly for GPT-3. Even with improvements, OpenAI doesn’t see tools like this as a replacement for coders. Rather, they hope to speed up programming and remove some of the drudgery. In the live demo, for example, OpenAI CEO Sam Altman casually noted Codex completed a step in seconds that would have taken him a half hour back when he was programming. Further, writing good software isn’t only about the actual coding bit, they suggest. “Programming is really about having a dream,” OpenAI CTO Greg Brockman told Wired, “It’s about having this picture of what you want to build, understanding your user, asking yourself, ‘How ambitious should we make this thing, or should we get it done by the deadline?’” Codex doesn’t come up with what to design or how to design it. It will need significant direction, oversight, and quality control for the foreseeable future. (Which is true of its sibling GPT-3 as well.) For now, these kinds of programs will be more like sidekicks as opposed to the story’s hero. They may also be a next step in the long-running evolution of computer languages. 
In a companion op-ed in TechCrunch, Brockman and Hadi Partovi, founder and CEO of Code.org, emphasize how far we’ve come from the days when a select few scientists laboriously programmed computers with punch cards and machine code. In a long progression, computer language has evolved from what suited machines to what suits us. “With AI-generated code, one can imagine a...
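To make the prompt-to-code idea concrete, here is the kind of pairing a Codex-style model works with: a natural-language prompt (often written as a docstring) and the implementation the model is expected to complete. This example is our own, not from OpenAI's demo:

```python
# Prompt a programmer might write (as a docstring)...
def fibonacci(n):
    """Return the n-th Fibonacci number (0, 1, 1, 2, 3, 5, ...)."""
    # ...and the completion a Codex-style model aims to generate:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```

The 37 percent figure quoted above is measured on requests of roughly this shape: the model sees the signature and docstring, and succeeds if its generated body passes the tests.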

Aug 15

5 min 51 sec

Five years ago, a plane called the Solar Impulse 2 flew around the world without using any liquid fuel. As you might guess from the name, the plane was solar-powered. It wasn’t the fastest—it took almost a year and a half to circumnavigate the globe, traveling 26,718 miles and stopping in 17 different cities. But it was a meaningful proof of concept, and a technological feat. Now Solar Impulse 2 has a successor, with equally ambitious plans on the horizon. Skydweller, as the new plane has been appropriately dubbed, relies on the same basic technology as Solar Impulse 2, but will be autonomous and able to fly continuously for up to 90 days. And by autonomous, I mean there won’t even be an option to have a pilot sit in the cockpit and direct the plane, because there is no cockpit. That frees up extra space that can be used for other purposes, though. Skydweller will be able to carry payloads of up to 800 pounds—and they’ll most likely consist of radar and camera equipment, as the US Navy is funding a demo of the aircraft as a surveillance tool for monitoring the whereabouts of ships. Talk about an (unrelenting) eye in the sky. But the fact that it will stay aloft for months on end is, of course, Skydweller’s main advantage. The solar aircraft is made by a Spanish-American aerospace startup called Skydweller Aero. Based in Oklahoma City, the company raised $32 million in its Series A funding round, led by Italian aerospace firm Leonardo. “For us, if you’re flying 90 days with one aircraft, that’s two takeoffs and landings versus hundreds,” Skydweller Aero co-founder John Parkes told Aviation Today. “Being able to fly thousands of miles, persist over an area for 30-60 days and fly back is a differentiator. 
It’s a huge cost savings to the US government when you look at the whole cost of doing a lot of the national security missions that we have.” The plane will stay airborne thanks to 2,900 square feet of photovoltaic cells that will blanket its surface, generating up to 2 kilowatts of electricity. As a backup in case it’s cloudy for a few days in a row, the plane will also be equipped with hydrogen fuel cells (maybe they’re not as “extremely silly” as Elon Musk thinks). With a wingspan of 236 feet (that’s just a bit larger than Boeing’s 747, whose wingspan measures 224 feet), Skydweller will fly the friendly skies at altitudes of 30,000-45,000 feet. “There are certainly differentiated missions that Skydweller can do that no other aircraft can do, but the core of it really is doing things that we do today better, smarter, cheaper, more effectively,” Parkes said. “And that is communications—being a node in the sky, whether for the military and first responder market or for the telecom world.” The company plans to start testing its aircraft in autonomous takeoff, landing, and flight this year. Once those tests are complete, they’ll be followed by long-endurance testing, with the goal of hitting at least 90 continuous days of flight. Image Credit: Skydweller Aero

Aug 13

3 min 22 sec