“Sleeker. Faster. More Intuitive” (The New York Times); “Welcome to a world where speed is everything” (Verizon FiOS); “Speed is God, and time is the devil” (chief of Hitachi’s portable-computer division). In “real” time, life speeds up until time itself seems to disappear—fast is never fast enough, everything has to be done now, instantly. To pause, delay, stop, slow down is to miss an opportunity and to give an edge to a competitor. Speed has become the measure of success—faster chips, faster computers, faster networks, faster connectivity, faster news, faster communications, faster transactions, faster deals, faster delivery, faster product cycles, faster brains, faster kids. Why are we so obsessed with speed, and why can’t we break its spell?

The cult of speed is a modern phenomenon. In “The Futurist Manifesto” of 1909, Filippo Tommaso Marinetti declared, “We say that the splendor of the world has been enriched by a new beauty: the beauty of speed.” The worship of speed reflected and promoted a profound shift in cultural values that occurred with the advent of modernity and modernization. With the emergence of industrial capitalism, the primary values governing life became work, efficiency, utility, productivity, and competition. When Frederick Winslow Taylor took his stopwatch to the factory floor in the early 20th century to increase workers’ efficiency, he began a high-speed culture of surveillance so memorably depicted in Charlie Chaplin’s Modern Times. Then, as now, efficiency was measured by the maximization of rapid production through the programming of human behavior.
While the idea of cruising around in a 3D-printed car and munching on 3D-printed chocolate before returning to a 3D-printed home sure is nice, no industry is poised to benefit from this burgeoning technology in quite the way that medicine is. Replacing a cancerous vertebra, delivering cancer-fighting drugs, and assisting in spinal fusion surgery are just some of the examples we’ve covered here at Gizmag. The latest groundbreaking treatment involves an Indian cancer patient who has had his upper jaw replaced with the help of 3D printing.
In today’s installment of “How 3D Printing is Changing Healthcare Forever,” a Massachusetts-based medical device company is forging new ground in knee replacement surgery. A combination of CT imaging, modeling software, and 3D printing technology is enabling ConforMIS to offer implants tailored specifically to each patient. The development could help avoid complications that often follow the procedure, such as pain arising from instability of the joint. One of the most promising applications of 3D printing in medical fields is its ability to produce patient-specific devices. We have recently seen 3D-printed implants enable a teenager to walk again, replace a cancerous vertebra in the neck, enable customized spinal fusion surgery, and replace upper and lower jaws.

Knee replacement surgery is a procedure undergone by around 700,000 people annually, according to the Centers for Disease Control and Prevention. Issues that can arise range from minor blood loss and infections to the threat of deep venous thrombosis. But the team at ConforMIS believes it can improve on traditional methods by steering away from generic, “off-the-shelf” implants to a more customizable solution.

The company’s approach is much like others used in the production of 3D-printed implants. A CT scan is taken of the patient’s hip, knee and ankle, with the company’s specialized software converting the scan into an exact 3D model of the patient’s deteriorating knee. Using this model, personalized implants and instruments are made as one-off devices, produced, in part, by 3D printers.
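ConforMIS’s software is proprietary, so the exact pipeline isn’t public, but the generic step the article describes (turning a CT volume into a printable surface model) can be sketched with open-source tools. The snippet below is a minimal illustration under my own assumptions: a scan already loaded as a NumPy array of Hounsfield units, a rough bone threshold of 300 HU, and illustrative file names. It is not the company’s actual method.

```python
# Minimal sketch: CT volume -> triangle mesh -> STL file for a 3D printer.
# Not ConforMIS's pipeline; the threshold and file names are hypothetical.
import numpy as np
from skimage import measure  # scikit-image
from stl import mesh         # numpy-stl

ct_volume = np.load("knee_ct.npy")  # hypothetical pre-stacked CT slices (HU)

# Extract the bone surface as a triangle mesh via marching cubes;
# ~300 HU is a common rough threshold for bone.
verts, faces, _, _ = measure.marching_cubes(ct_volume, level=300)

# Pack the triangles into an STL mesh and save it for printing.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    surface.vectors[i] = verts[tri]
surface.save("knee_model.stl")
```

A real workflow would also smooth the mesh and register it to the surrounding joint geometry before any implant is designed; this sketch stops at the raw surface model.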
“More recently and perhaps most importantly, it’s been found that people who learn a second language, even in adulthood, can better avoid cognitive decline in old age. In fact, when everything else is controlled for, bilinguals who come down with dementia and Alzheimer’s do so about four-and-a-half years later than monolinguals. Dr. Thomas Bak, a lecturer in the philosophy, psychology, and language sciences department at the University of Edinburgh, conducted the study and found that level of education and intelligence mattered less than learning a second language when it came to delaying cognitive decline. “It’s not the good memory that bilinguals have that is delaying cognitive decline,” Bak told me. “It’s their attention mechanism. Their ability to focus in on the details of language.”
Polyglots tend to be good at paying attention in a wide variety of ways, especially when performing visual tasks (like searching a scene or a list for a specific name or object) and when multitasking, which, according to Bak’s theory, is likely improved thanks to the practice of mentally switching between one’s native and foreign language while learning the foreign language.”—For a Better Brain, Learn Another Language - The Atlantic
““Cognitive traps,” or simple mistakes in spelling or comprehension that our brains tend to make when taking linguistic shortcuts (such as how you can easily read “tihs senetcne taht is trerilby msispleld”), are better avoided when one speaks multiple languages. Multi-linguals might also be better decision-makers. According to a new study, they are more resistant to conditioning and framing techniques, making them less likely to be swayed by such language in advertisements or political campaign speeches. Those who speak multiple languages have also been shown to be more self-aware spenders, viewing “hypothetical” and “real” money (the perceived difference between money on a credit card and money in cold, hard cash) more similarly than monolinguals.”—For a Better Brain, Learn Another Language - The Atlantic
A crop of books by disillusioned physicians reveals a corrosive doctor-patient relationship at the heart of our health-care crisis
…Ours is a technologically proficient but emotionally deficient and inconsistent medical system that is best at treating acute, not chronic, problems: for every instance of expert treatment, skilled surgery, or innovative problem-solving, there are countless cases of substandard care, overlooked diagnoses, bureaucratic bungling, and even outright antagonism between doctor and patient. For a system that invokes “patient-centered care” as a mantra, modern medicine is startlingly inattentive—at times actively indifferent—to patients’ needs.
To my surprise, I’ve now learned that patients aren’t alone in feeling that doctors are failing them. Behind the scenes, many doctors feel the same way. And now some of them are telling their side of the story. A recent crop of books offers a fascinating and disturbing ethnography of the opaque land of medicine, told by participant-observers wearing lab coats. What’s going on is more dysfunctional than I imagined in my worst moments. Although we’re all aware of pervasive health-care problems and the coming shortage of general practitioners, few of us have a clear idea of how truly disillusioned many doctors are with a system that has shifted profoundly over the past four decades. These inside accounts should be compulsory reading for doctors, patients, and legislators alike. They reveal a crisis rooted not just in rising costs but in the very meaning and structure of care. Even the most frustrated patient will come away with respect for how difficult doctors’ work is. She may also emerge, as I did, pledging (in vain) that she will never again go to a doctor or a hospital.
“I think mathematicians do mathematics for reasons that are very similar to those of musicians playing music or any artist doing their art. It’s all about trying to contribute to a certain understanding of ourselves and of the world around us.”—Princeton mathematician Manjul Bhargava, who has been awarded the 2014 Fields Medal, one of the most prestigious awards in mathematics. Read more about Bhargava and the award here and watch a video about him here. (via mathematica)
“Of all man’s instruments, the most wondrous, no doubt, is the book. The other instruments are extensions of his body. The microscope, the telescope, are extensions of his sight; the telephone is the extension of his voice; then we have the plow and the sword, extensions of the arm. But the book is something else altogether: the book is an extension of memory and imagination.”—Jorge Luis Borges (via observando)
“In July 1994 the equivalent of more than 200,000 megatons of TNT was deposited in Jupiter’s upper atmosphere as comet Shoemaker-Levy 9 slammed into the planet. If that kind of collision happens on Earth while humanity is present, it would very likely result in the abrupt extinction of our species.”—Neil deGrasse Tyson (via whats-out-there)
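For scale, a megaton of TNT is defined as 4.184 × 10¹⁵ joules, so Tyson’s figure converts to roughly 8 × 10²⁰ J. A one-line check (the constant is the standard definition, not something from the quote):

```python
# Convert the quoted 200,000 megatons of TNT into joules.
MEGATON_IN_JOULES = 4.184e15  # standard definition of 1 Mt of TNT

impact_energy = 200_000 * MEGATON_IN_JOULES
print(f"{impact_energy:.2e} J")  # ~8.37e+20 J
```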
Geologists, climate scientists, ecologists – and a lawyer – gather in Berlin for talks on whether to rename age of human life
A disparate group of experts from around the world will meet for the first time on Thursday for talks on what must rank as one of the most momentous decisions in human history. The question confronting the scientists and other specialists is straightforward enough, even if the solution is far from simple. Is it time to call an end to the epoch we live in and declare the dawn of a new time period: one defined by humanity’s imprint on the planet? The 30-strong group, made up of geologists, climate scientists, ecologists – and a lawyer for good measure – will start their deliberations in a room at the Haus der Kulturen der Welt, or House of the Cultures of the World, a contemporary arts centre in Berlin.

Like many things in the world of geology, little moves fast at the International Commission on Stratigraphy (ICS), the body that decides the time period we live in. But the arrival and informal adoption of the word “anthropocene” to mean a new epoch of humanity has somewhat forced their hand. The word came into common usage after Paul Crutzen, a Dutch chemist and Nobel prize winner, used the term in 2000. He argued in an academic newsletter that the current geological epoch should be awarded the new name to reflect the major and ongoing impact of human life on Earth.

The official arrival of the Anthropocene would mark the end of the Holocene, the geological time we live in now. Identified by a geochemical signal in Greenland ice cores that marks the onset of warmer and wetter conditions at the end of the last ice age, the Holocene defined a time when humans colonised new territories and the population swelled.
Scientists have treated a man they believe to be the first patient with internet addiction disorder brought on by overuse of Google Glass.
The man had been using the technology for around 18 hours a day – removing it only to sleep and wash – and complained of feeling irritable and argumentative without the device. In the two months since he bought the device, he had also begun experiencing his dreams as if viewed through the device’s small grey window. […]
The patient – a 31-year-old US navy serviceman – had checked into the navy’s Substance Abuse and Rehabilitation Programme (Sarp) in September 2013 for alcoholism treatment. The facility requires patients to steer clear of addictive behaviours for 35 days – no alcohol, drugs, or cigarettes – but it also takes away all electronic devices.
Doctors noticed the patient repeatedly tapped his right temple with his index finger. He said the movement was an involuntary mimic of the motion regularly used to switch on the heads-up display on his Google Glass.
Lev Landau, a Nobelist and one of the fathers of a great school of Soviet physics, had a logarithmic scale for ranking theorists, from 1 to 5. A physicist in the first class had ten times the impact of someone in the second class, and so on. He modestly ranked himself as 2.5 until late in life, when he became a 2. In the first class were Heisenberg, Bohr, and Dirac among a few others. Einstein was a 0.5! My friends in the humanities, or other areas of science like biology, are astonished and disturbed that physicists and mathematicians (substitute the polymathic von Neumann for Einstein) might think in this essentially hierarchical way. Apparently, differences in ability are not manifested so clearly in those fields. But I find Landau’s scheme appropriate: There are many physicists whose contributions I cannot imagine having made. I have even come to believe that Landau’s scale could, in principle, be extended well below Einstein’s 0.5. The genetic study of cognitive ability suggests that there exist today variations in human DNA which, if combined in an ideal fashion, could lead to individuals with intelligence that is qualitatively higher than has ever existed on Earth: Crudely speaking, IQs of order 1,000, if the scale were to continue to have meaning.
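Landau’s scale is logarithmic: each full class of difference corresponds to a factor of ten in impact. He never wrote down a formula, but the implied arithmetic is easy to make explicit; the toy function below is my own rendering of the rule, not anything from Landau.

```python
# Toy rendering of Landau's logarithmic ranking of physicists.
# Landau gave no formula; this just makes the stated rule explicit:
# one full class of difference = a tenfold difference in impact.

def relative_impact(rank_a: float, rank_b: float) -> float:
    """How many times more impact a class rank_a physicist has than
    a class rank_b one (lower rank means greater impact)."""
    return 10 ** (rank_b - rank_a)

print(relative_impact(1.0, 2.0))  # first class vs second class: 10.0
print(relative_impact(0.5, 2.5))  # Einstein vs Landau's early self-rank: 100.0
print(relative_impact(0.5, 2.0))  # Einstein vs Landau's later self-rank: ~31.6
```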
Many of us drink green tea for its wonderful health benefits, including proven antioxidant, antimicrobial, anti-aging and anti-cancer properties. Now, researchers in Singapore have taken its cancer-fighting properties to the next level, developing a green tea-based nanocarrier that encapsulates cancer-killing drugs. It is the first time green tea has been used to deliver drugs to cancer cells, and the results are promising: animal studies show far more effective tumor reduction than with the drug alone, while significantly reducing the accumulation of the drug in other organs.
The new drug delivery system, developed at the Institute of Bioengineering and Nanotechnology (IBN) of A*STAR, uses epigallocatechin gallate (EGCG), a powerful antioxidant and catechin found in green tea and used therapeutically to treat cancer and other disorders.
"We have developed a green tea-based carrier in which the carrier itself displayed anti-cancer effect and can boost cancer treatment when used together with the protein drug," says Dr Motoichi Kurisawa, IBN Principal Research Scientist and Team Leader.
One of the main drawbacks of chemotherapy is that it also kills healthy cells in surrounding tissues and organs. Carriers allow more accurate treatment, acting like homing missiles that target diseased cells and release cancer-destroying drugs. However, the amount of the drug they can deliver is limited so more carriers need to be administered for treatment to be effective. Current carriers are made of materials that at best offer no therapeutic value and at worst may have adverse effects when used in large quantities, so the green tea-based carrier is an exciting development.
One of medicine’s greatest innovations in the 20th century was the development of antibiotics. It transformed our ability to combat disease. But medicine in the 21st century is rethinking its relationship with bacteria and concluding that, far from being uniformly bad for us, many of these organisms are actually essential for our health.
Nowhere is this more apparent than in the human gut, where the microbiome – the collection of bacteria living in the gastrointestinal tract – plays a complex and critical role in the health of its host. The microbiome interacts with and influences organ systems throughout the body, including, as research is revealing, the brain. This discovery has led to a surge of interest in potential gut-based treatments for neuropsychiatric disorders and a new class of studies investigating how the gut and its microbiome affect both healthy and diseased brains.
The microbiome consists of a startlingly massive number of organisms. Nobody knows exactly how many or what type of microbes there might be in and on our bodies, but estimates suggest there may be anywhere from three to 100 times more bacteria in the gut than cells in the human body. The Human Microbiome Project, co-ordinated by the US National Institutes of Health (NIH), seeks to create a comprehensive database of the bacteria residing throughout the gastrointestinal tract and to catalogue their properties.
The lives of the bacteria in our gut are intimately entwined with our immune, endocrine and nervous systems. The relationship goes both ways: the microbiome influences the function of these systems, which in turn alter the activity and composition of the bacterial community. We are starting to unravel this complexity and gain insight into how gut bacteria interface with the rest of the body and, in particular, how they affect the brain.
The Machiavellian intelligence hypothesis takes several forms, but all stem from the proposition that the advanced cognitive processes of primates are primarily adaptations to the special complexities of their social lives, rather than to nonsocial environmental problems such as finding food, which were traditionally thought to be the province of intelligence. The new “social” explanation for the evolution of INTELLIGENCE arose in the context of proliferating field studies of primate societies in the 1960s and 1970s. The paper generally recognized as pivotal in launching this wave of studies was Nicholas Humphrey’s “The Social Function of Intellect” (1976), the first to spell out the idea explicitly, although important insights were offered by earlier writers, notably Alison Jolly (see Whiten and Byrne 1988a for a review). By 1988 the idea had inspired sufficient interesting empirical work to produce the volume Machiavellian Intelligence (Byrne and Whiten 1988; now see also Whiten and Byrne 1997), which christened the area.
The Machiavellian intelligence hypothesis has been recognized as significant beyond the confines of primatology, however. On the one hand, it is relevant to all the various disciplines that study human cognitive processes. Because the basic architecture of these processes is derived from the legacy of our primate past, a more particular Machiavellian subhypothesis is that developments in specifically human intelligence were also most importantly shaped by social complexities. On the other hand, looking beyond primates, the hypothesis has been recognized as of relevance to any species of animal with sufficient social complexity.
Why “Machiavellian” intelligence? Humphrey talked of “the social function of intellect” and some authors refer to the “social intelligence hypothesis” (Kummer et al. 1997). But “social” is not really adequate as a label for the hypothesis. Many species are social (some living in much larger groups than primates) without being particularly intelligent; what is held to be special about primate societies is their complexity, which includes the formation of sometimes fluid and shifting alliances and coalitions. Within this context, primate social relationships have been characterized as manipulative and sometimes deceptive at sophisticated levels (Whiten and Byrne 1988b). Primates often act as if they were following the advice that Niccolo Machiavelli offered to sixteenth-century Italian prince-politicians to enable them to socially manipulate their competitors and subjects (Machiavelli 1532; de Waal 1982). “Machiavellian intelligence” therefore seemed an appropriate label, and it has since passed into common usage.
An important prediction of the hypothesis is that greater social intellect in some members of a community will exert selection pressures on others to show greater social expertise, so that over evolutionary time there will be an “arms race” of Machiavellian intelligence. Indeed, one of the questions the success of the hypothesis now begins to raise is why such escalation has not gone further than it has in many species.
“Hood goes on to trace how the self emerges in childhood and examines why this notion of the illusory self is among the hardest concepts to accept, contrasting the “ego theory” of the self, which holds that we are essential entities inside bodies, with Hume’s “bundle theory,” which constructs the self not as a single unified entity but as a bundle of sensations, perceptions, and thoughts lumped together. Neuroscience, Hood argues, only supports the latter. The Self Illusion tells the story of how that bundle forms and why it sticks together, revealing the brain’s own storytelling as the centripetal force of the self.”—The Self Illusion: How Our Social Brain Constructs Who We Are | Brain Pickings
“Each morning, we wake up and experience a rich explosion of consciousness — the bright morning sunlight, the smell of roast coffee and, for some of us, the warmth of the person lying next to us in bed. As the slumber recedes into the night, we awake to become who we are. The morning haze of dreams and oblivion disperses and lifts as recognition and recall bubble up the content of our memories into our consciousness. For the briefest of moments we are not sure who we are and then suddenly ‘I,’ the one that is awake, awakens. We gather our thoughts so that the ‘I’ who is conscious becomes the ‘me’ — the person with a past. The memories of the previous day return. The plans for the immediate future reformulate. The realization that we have things to get on with reminds us that it is a workday. We become a person whom we recognize. The call of nature tells us it is time to visit the bathroom and en route we glance at the mirror. We take a moment to reflect. We look a little older, but we are still the same person who has looked in that same mirror every day since we moved in. We see our self in that mirror. This is who we are. The daily experience of the self is so familiar, and yet the brain science shows that this sense of the self is an illusion. Psychologist Susan Blackmore makes the point that the word ‘illusion’ does not mean that it does not exist — rather, an illusion is not what it seems. We all certainly experience some form of self, but what we experience is a powerful depiction generated by our brains for our own benefit.”—The Self Illusion: How Our Social Brain Constructs Who We Are | Brain Pickings
““Evolution doesn’t care about you past your reproductive age. It doesn’t want you either to live longer or to die, it just doesn’t care. From the standpoint of natural selection, an animal that has finished reproducing and performed the initial stage of raising young might as well be eaten by something, since any favorable genetic quality that expresses later in life cannot be passed along.” Because a mutation that favors long life cannot make an animal more likely to succeed at reproducing, selection pressure works only on the young.”—What Happens When We All Live to 100? - The Atlantic
The Congressional Budget Office estimates that over the next decade, all federal spending growth will come from entitlements—mainly Social Security and Medicare—and from interest on the national debt. The nonpartisan think tank Third Way has calculated that at the beginning of the Kennedy presidency, the federal government spent $2.50 on public investments—infrastructure, education, and research—for every $1 it spent on entitlements. By 2022, Third Way predicts, the government will spend $5 on entitlements for every $1 on public investments. Infrastructure, education, and research lead to economic growth; entitlement subsidies merely allow the nation to tread water.
“By constantly moving the flashlight of your attention to the perimeter of your understanding, you enlarge your sense of the world.”— Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (via inthenoosphere)
Beethoven is a singularity in the history of art—a phenomenon of dazzling and disconcerting force. He not only left his mark on all subsequent composers but also molded entire institutions. The professional orchestra arose, in large measure, as a vehicle for the incessant performance of Beethoven’s symphonies. The art of conducting emerged in his wake. The modern piano bears the imprint of his demand for a more resonant and flexible instrument. Recording technology evolved with Beethoven in mind: the first commercial 33⅓ r.p.m. LP, in 1931, contained the Fifth Symphony, and the duration of first-generation compact disks was fixed at seventy-five minutes so that the Ninth Symphony could unfurl without interruption. After Beethoven, the concert hall came to be seen not as a venue for diverse, meandering entertainments but as an austere memorial to artistic majesty. Listening underwent a fundamental change. To follow Beethoven’s dense, driving narratives, one had to lean forward and pay close attention. The musicians’ platform became the stage of an invisible drama, the temple of a sonic revelation.
“Although I am a typical loner in my daily life, my awareness of belonging to the invisible community of those who strive for truth, beauty, and justice has prevented me from feelings of isolation.”—Albert Einstein (via alteringminds)
“Counterculture giants of the time, like Stewart Brand, Buckminster Fuller and Ivan Illich, championed vernacular tools as a way to give people the personal autonomy and choices they craved. But the consumerist version of this vision ultimately prevailed, such that the decentralized empowerment that networked computers provided has been a mixed bag.”—Morozov on the Maker Movement | David Bollier (via johnborthwick)