What is it? A new 3D printer that can lay down several types of materials within the same platform could lead the way to custom-printed robots and other complex machines.
Why does it matter? Some 3D printers work by fusing together fine layers of metal powder with a heat source such as a laser. Other printing methods extrude plastic, spray aerosols, or cure resins and conductive inks. Manufacturing a complex product might require all of the above, but moving a printed object from machine to machine is cumbersome and can introduce imprecision. This prototype, developed by a team from the Georgia Institute of Technology and the Singapore University of Technology and Design, rolls four printing methods into one, allowing its user to print a full machine in one place.
How does it work? The platform has nozzles for each of the printing techniques. “Each has its own software, lights for curing the materials, and a moving platform and robot arms that can pick up and place components,” according to Nature. “This allows the printheads to work together to build single layers with multiple materials.” For example, the team has printed a light-emitting diode, complete with its circuitry, while simultaneously printing a plastic case to enclose it.
What is it? A team of researchers has created an algorithm for simulating neural connections that it calls “a decisive step towards creating the technology to achieve simulations of brain-scale networks on future supercomputers of the exascale class.” The term “exascale” refers to computers that can perform quintillions of calculations per second, versus the quadrillions of calculations that the world’s fastest machines can currently process.
Why does it matter? Scientists are on a quest to create computers that can learn and solve problems the way humans do. But to create the artificial neural networks needed to achieve this, they first must improve their understanding of how our brain cells communicate. Because the human brain contains roughly 100 billion neurons, it has not been possible to simulate the full human neural network.
How does it work? Brain simulations normally require a massive amount of computing power. The new algorithm “allows larger parts of the human brain to be represented using the same amount of computer memory,” the team said in a news release. “Simultaneously, the new algorithm significantly speeds up brain simulations on existing supercomputers.”
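The release doesn’t spell out the algorithm’s mechanics, but the memory saving it describes can be loosely illustrated with a toy sketch (all sizes and the storage schemes below are invented for illustration, not the team’s actual method): in a naive distributed simulation, every compute node keeps an entry for every neuron in the whole network, so per-node memory grows with total network size; storing only the connections that actually target a node’s own neurons keeps memory proportional to local connectivity instead.

```python
import random

N_TOTAL = 100_000  # neurons in the whole (toy) network
N_LOCAL = 1_000    # neurons hosted on this one compute node
FAN_IN = 100       # incoming connections per local neuron

random.seed(0)

# Naive scheme: this node keeps a lookup entry for every neuron in the
# network, connected locally or not -- memory scales with N_TOTAL.
naive_table = {src: [] for src in range(N_TOTAL)}

# Sparse scheme: store only connections that actually target local
# neurons -- memory scales with N_LOCAL * FAN_IN, not with N_TOTAL.
sparse_table = {}
for tgt in range(N_LOCAL):
    for src in random.sample(range(N_TOTAL), FAN_IN):
        sparse_table.setdefault(src, []).append(tgt)

print(len(naive_table))   # one entry per neuron in the whole network
print(len(sparse_table))  # only sources that connect to this node
```

Under this toy scheme, growing the total network no longer inflates each node’s tables, which is the kind of scaling behavior the release describes.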
What is it? This season’s must-have accessory: a wearable, portable brain scanner that can create a 3D map of brain function while its user is moving.
Why does it matter? Stationary machines used to measure neural activity, like functional magnetic resonance imaging (fMRI) scanners, require the patient to sit still — a problem for squirmy children and people with movement disorders such as Parkinson’s disease. A helmet with the same capabilities solves that problem, and opens up new types of research. “You can look at aspects of brain function involving spatial navigation, which is hard to do with a subject who is stationary,” Richard Bowtell, a physics professor at the University of Nottingham and co-author of a study in Nature, told IEEE Spectrum. “You can also look at more natural interactions between people when they are free to move.”
How does it work? The researchers 3D-printed helmets custom-shaped to the subjects’ heads and embedded them with sensors called optically pumped magnetometers. These can pick up the brain’s electrical currents via magnetic fields on the scalp, which a computer can then turn into a visual representation of brain function. Special coils placed in the helmet canceled out interference from Earth’s magnetic field. The team recorded the subjects’ brain activity while they bounced a ball or drank from a mug, using the helmet and then a traditional scanner for comparison. Results from the two scanners were comparable.
Top image credit: The University of Nottingham.
What is it? MIT bioengineers have discovered a way to simulate human organ functions on a single chip that they can then use to test drugs.
Why does it matter? Drugs undergo safety and efficacy testing on animals before they get the green light to enter human trials and, eventually, the market. Animal testing raises humanitarian concerns for some, of course, but it has other drawbacks too: it doesn’t always reveal side effects, and a drug that works on an animal might not work on a human. Testing drugs on simulated organs in a lab could cut animals out of the process altogether, giving pharmaceutical companies enough information to know whether human trials can begin.
How does it work? Each system consists of millions of cells from human organs connected via tiny channels that allow fluid to move between them, just as blood and other fluids circulate through the human body. The researchers linked up to 10 organ types, including the brain, liver, lung, heart and kidneys. After delivering a drug to gastrointestinal tissue to simulate swallowing a pill, they were able to monitor its breakdown and distribution throughout the other “organs.”
What is it? Researchers used artificial intelligence and machine learning to discover thousands of previously unknown species of viruses.
Why does it matter? Viruses certainly can make us sick, but they also serve a host of beneficial purposes: in gene and cell therapy, cancer research and even pest control. We need to know as much about these agents as we can. But because they’re tiny and fast-evolving, they’re hard to pin down long enough to even classify them. Scientists have begun a laborious process of comparing DNA samples collected in the wild against the genetic sequences of known viruses, hoping to find matches that identify specific microbes. But as Nature points out, “that method often fails, because virologists cannot search for what they do not know.” Machine learning can detect emerging patterns in the code that humans might miss, and speed up the whole process to boot.
How does it work? A team led by Simon Roux, a computational biologist at the Department of Energy’s Joint Genome Institute, taught computers to recognize 805 genomic sequences within the Inoviridae family of viruses, of which fewer than 100 species were known at the time. After giving the machine-learning algorithm an additional 2,000 sequences, some of them from viruses and others from bacteria, the team fed it huge sets of genomic data. The result: The algorithm detected more than 10,000 viruses from the Inoviridae family.
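The article doesn’t describe Roux’s model itself, but the general idea of classifying sequences by their genomic signatures can be sketched with a toy nearest-profile classifier over k-mer frequencies (the sequences, labels, and approach below are invented for illustration and are far simpler than the team’s actual machine-learning pipeline):

```python
from collections import Counter
import math

K = 3  # k-mer length: classify sequences by their 3-letter subword profile

def kmer_profile(seq):
    """Normalized k-mer frequency vector for a DNA sequence."""
    counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def similarity(p, q):
    """Cosine similarity between two sparse k-mer profiles."""
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Hypothetical "training" profiles built from invented labeled sequences.
training = {
    "virus-like":    kmer_profile("ATGGCGGCGGCGATGGCGGCGTAA"),
    "bacteria-like": kmer_profile("ATGTTTAAATTTAAATTTAAATAA"),
}

def classify(seq):
    """Assign a sequence to the most similar labeled profile."""
    profile = kmer_profile(seq)
    return max(training, key=lambda label: similarity(profile, training[label]))

print(classify("ATGGCGGCGATGGCGGCGGCGTAA"))  # matches the virus-like profile
```

A real system trains on thousands of sequences and richer features, but the principle is the same: statistical signatures in the genetic code let an algorithm flag candidates, including viruses no one has searched for before.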