
New Measures for Size, as World’s People Surpass 8 Billion

Thomas Adamson wrote . . . . . . . . .

What is bigger: A ronna or a quetta?

Scientists meeting outside Paris on Friday have the answer: they expanded the world’s system of measurement units for the first time this century, just as the global population surges past 8 billion.

Rapid scientific advances and vast worldwide data storage on the web, in smartphones and in the cloud mean that the very terms used to measure weight and size need extending too. And one British scientist led the push Friday to incorporate bold new, tongue-twisting prefixes at both the gigantic and the minuscule ends of the scale.

“Most people are familiar with prefixes like milli- as in milligram. But these are prefixes for the biggest and smallest levels ever measured,” Dr Richard Brown, head of metrology at the U.K.’s National Physical Laboratory, who proposed the four new prefixes, told The Associated Press.

“In the last 30 years, the datasphere has increased exponentially, and data scientists have realized they will no longer have words to describe the levels of storage. These terms are upcoming, the future,” he explained.

There’s the gargantuan “ronna” (a one followed by 27 zeros, or 10²⁷) and its big brother, the “quetta” (30 zeros, or 10³⁰).

Their ant-sized counterparts are the “ronto” (10⁻²⁷) and the “quecto” (10⁻³⁰), representing the smaller numbers needed for quantum science and particle physics.

Brown presented the new prefixes to officials from 64 nations attending the General Conference on Weights and Measures in Versailles, outside Paris, who approved them on Friday.

The conference, which takes place every four years in France, is the supreme authority of the International Bureau of Weights and Measures. The new terms take effect immediately, marking the first time since 1991 that any new additions have been made.

Brown said the new terms also make it easier to describe things scientists already know about — reeling off a list of the smallest and biggest things discovered by humankind.

Did you know that the mass of an electron is one rontogram? And that a byte of data on a mobile increases the phone’s mass by one quectogram?

Further from home, the planet Jupiter is just two quettagrams in mass. And, incredibly, “the diameter of the entire observable universe is just one ronnameter,” Brown said.
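For readers who want to check the arithmetic, here is a minimal Python sketch that converts the article’s examples into the new units. The reference values are rounded figures added for illustration; they are not from the article.

```python
# New SI prefixes adopted in 2022, as powers of ten.
PREFIXES = {
    "ronna": 1e27, "quetta": 1e30,    # the big pair
    "ronto": 1e-27, "quecto": 1e-30,  # the small pair
}

jupiter_mass_g = 1.9e30        # Jupiter: ~1.9e27 kg, i.e. ~1.9e30 g
electron_mass_g = 9.1e-28      # electron: ~9.1e-31 kg
universe_diameter_m = 8.8e26   # observable universe: ~8.8e26 m

print(jupiter_mass_g / PREFIXES["quetta"], "quettagrams")      # ~1.9: "about two"
print(electron_mass_g / PREFIXES["ronto"], "rontograms")       # ~0.91: "about one"
print(universe_diameter_m / PREFIXES["ronna"], "ronnameters")  # ~0.88: "about one"
```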

He explained how the new names were not chosen at random: The first letter of the new prefixes had to be one not used in other prefixes and units.

“There were only the letters ‘r’ and ‘q’ that weren’t already taken. Following that, there’s a precedent that they sound similar to Greek letters and that big number prefixes end with an ‘a’ and smaller numbers with an ‘o,’” he added.

“It was high time. (We) need new words as things expand,” Brown said. “In just a few decades, the world has become a very different place.”


Source : AP

What A Global Flavor Map Can Tell Us About How We Pair Foods


Each node in this network denotes an ingredient, the color indicates its food category, and node size reflects the ingredient’s prevalence in recipes. Two ingredients are connected if they share a significant number of flavor compounds, and link thickness represents the number of shared compounds between the two ingredients.

Nancy Shute wrote . . . . . . . . .

There’s a reason why Asian dishes often taste so different from the typical North American fare: North American recipes rely on flavors that are related, while East Asian cooks go for sharp contrasts.

That’s the word from researchers at the University of Cambridge, who used a tool called network analysis to chart how ingredients are related through their shared chemical flavor compounds. They did it to test the widely believed notion that foods with compatible flavors are chemically similar.

It turns out that’s true in some regional cuisines, particularly in North America – think milk, egg, butter, cocoa, and vanilla. But in East Asia, cooks are more likely to combine foods with few chemical similarities – from shrimp to lemon to ginger, soy sauce, and hot peppers.

The scientists used 56,498 recipes to test their questions about “the ‘rules’ that may underlie” recipes. (They mined Epicurious and Allrecipes as well as the Korean site Menupan.) They note that we rely on a very small number of recipes — around one million — compared with all the possible food and flavor combinations available to us — more than a trillion, by their estimates.

To illustrate their findings, the scientists decided to show, not just tell. The result: A stunning chart showing which foods are chemical cousins, and which are flavor outliers. Cucumber stands apart, while cheeses cluster in a clique, as do fish. Cumin connects to ginger and cinnamon, while tomato stands in a strange subgroup with chickpeas, mint, cardamom and thyme.


Panel A lists the ingredients in two recipes, together with their flavor compounds. Each flavor compound is linked to the ingredients that contain it, forming a bipartite network. Panel B shows the flavor network, whose nodes are ingredients, linked if they share at least one flavor compound.
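The construction is straightforward to sketch in code. Here is a small Python example that builds the same kind of ingredient network from a toy ingredient-to-compound table; the ingredient and compound lists are invented for illustration, not taken from the study.

```python
from itertools import combinations

# Toy table mapping each ingredient to its flavor compounds.
ingredients = {
    "tomato": {"furaneol", "hexanal", "linalool"},
    "mint":   {"linalool", "menthol"},
    "cheese": {"hexanal", "butyric acid", "diacetyl"},
    "butter": {"butyric acid", "diacetyl"},
}

# Connect two ingredients if they share at least one compound;
# the edge weight is the number of shared compounds.
edges = {}
for a, b in combinations(ingredients, 2):
    shared = ingredients[a] & ingredients[b]
    if shared:
        edges[(a, b)] = len(shared)

# Print the network, heaviest links first.
for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight} shared compound(s)")
```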

The work appeared in Scientific Reports, an open-access journal from the Nature Publishing Group.

This network is beautiful enough to hang by the stove, and a big improvement on the usual eyeball-numbing illustrations in scientific journals.

But with people accustomed to eating food from around the world, those preferences may be shifting. A new survey declared salty caramel to be the hot North American flavor for 2012. Can garlic-soy caramels be far behind?


Source : npr

They Made a Material that Doesn’t Exist on Earth

Paddy Hirsch wrote . . . . . . . . .

It sounds like the plot of a science fiction movie: humans are destroying the Earth, gouging huge scars in its crust, and polluting the air and the ground as they mine and refine a key element essential for technological advance. One day, scientists examining an alien meteorite discover a unique metal that negates the need for all that excavation and pollution. Best of all, the metal can be replicated, in a laboratory, using base materials. The world is saved!

OK, we amped the story a wee bit there. No aliens, for one thing (unless you know something we don’t). But the rest of it is true. Two teams of scientists — one at Northeastern University in Boston, the other at the University of Cambridge in the UK — recently announced that they managed to manufacture, in a lab, a material that does not exist naturally on Earth. Until now, it had only been found in meteorites.

We spoke to Laura Henderson Lewis, one of the professors on the Northeastern team, and she told us the material found in the meteorites is a combination of two base metals, nickel and iron, which were cooled over millions of years as meteoroids and asteroids tumbled through space. That process created a unique compound with a particular set of characteristics that make it ideal for use in the high-end permanent magnets that are an essential component of a vast range of advanced machines, from electric vehicles to space shuttle turbines.

The compound is called tetrataenite, and the fact that scientists have found a way to make it in a lab is a huge deal. If synthetic tetrataenite works in industrial applications, it could make green energy technologies significantly cheaper. It could also roil the market in rare earths, currently dominated by China, and create a seismic shift in the industrial balance between China and the West.

Earthly, yet oh, so rare

As all of our readers will doubtless remember from their high school science classes, magnets are an essential component of any piece of machinery that runs on electricity: they are the conduit that transforms electric power into mechanical action.

Most magnets, like the magnet in the battery-powered clock on your office wall, for example, are pretty cheap and easy to produce. The permanent magnets that are used in advanced machinery, on the other hand, have to be able to resist tremendous pressures and temperatures for long periods of time. And to acquire those properties, they need a special ingredient: a rare earth.

Rare earths aren’t that rare. They’re elements that can be found all over the world. The difficult part is extracting them. For one thing, you have to dig them out of the ground. That’s hard enough. Then you have to separate them out: they’re usually combined with other elements or materials. Breaking these compounds down, and refining them to get the raw elements, is an expensive and messy business.

The China syndrome

The US used to be a leader in the rare earths world, but, in the 1980s, China found a huge deposit of these elements within its borders. Jonathan Hykawy is president of Stormcrow Capital, an investment firm that tracks rare earths markets. He has a good story about this discovery.

“A few Chinese companies opened mines in inner Mongolia and they were iron ore mines, and they were producing a waste material that ended up in their tailings piles,” Hykawy says. “The Japanese were buying large quantities of this iron, and they said, ‘Can we sample the waste piles?’ And the Chinese said, ‘Sure, take all you want.’ The Japanese came back a little while later and said, ‘We’d like to buy the waste.’ And the Chinese said, ‘Well, why wouldn’t we sell it to you? I mean, it’s waste. What are we going to do with it?’ Turns out it was rich in rare earths.”

The Chinese caught on pretty quickly, and began extracting these rare elements themselves. They could do it a lot more cheaply than anyone else, because their labor costs were a lot lower, and they were willing to put up with the environmental costs, which were not insignificant. Pretty soon, Hykawy says, US production ceased, and China effectively took over the market. Today, China controls more than 71% of the world’s extraction and 87% of the world’s processing capacity of rare earths.

Two of these rare earths, neodymium and praseodymium, are key components in the manufacturing of permanent magnets, which means that China now dominates the permanent magnet market, too, making more than 80 percent of these high-end instruments. A decade ago, this didn’t seem to be a problem. China was a willing and cooperative trading partner, apparently so unthreatening that in 2004 the US actually outsourced the production of magnets used in the guidance systems for American cruise missiles and precision bombs to a Chinese company.

“We had US production,” Laura Lewis says. “Magnaquench, a subsidiary of General Motors. It was in Anderson, Indiana, and it went wholesale over to China. It was a short-term view of economics; profit up front, but then we lost our capabilities down the road.”

Today, relations with China are more fraught. And the need for both rare earths and permanent magnets is increasing, as we move to a clean-energy economy.

The US has awoken to the realization that it is at a significant strategic disadvantage to China in this vital area for its economy and national security. It has restarted an idled rare earths mine in California, and it is looking at potential new mining sites in Arizona, Nevada, and Wyoming. But those mines will take more than a decade to come online.

Game changer

This is why the discovery of synthetic tetrataenite is so exciting, Jonathan Hykawy says. The compound is so tough that manufacturers could make permanent magnets out of it for all but the most demanding pieces of machinery. If that happens, the US could fill a huge part of the magnet market itself, and reduce its need for certain rare earths. And it would make for a huge shift in America’s relationship with China. No longer would the US be beholden to a competitor for these key materials or dependent on them for certain parts essential for the production of vital technology.

There is a potential downside, however. Rare earths aren’t just used in the production of permanent magnets. They’re used in fiber optics, in radiation scanners, in televisions, in personal electronics. If a big part of the rare earths market disappears because of tetrataenite, Hykawy says, the production of all of these other important rare earths could be disrupted. They could become significantly more expensive to produce, which could drive up the cost of a range of consumer and industrial goods.

Far out

But it will be a long time before tetrataenite is in a position to disrupt any existing markets, Laura Lewis says. She says there is still a lot of testing to be done to find out whether lab tetrataenite is as hardy and as useful as the outer space material. And even if it turns out to be as good, it will be five to eight years “pedal to the metal” before anyone could make permanent magnets out of it.

In the meantime, China’s competitors are working hard to source rare earths of their own. The US is investing in mines in Australia; there’s exploration ongoing in Malaysia, and the Japanese are researching ways to extract elements from mud mined from the sea bed. Jonathan Hykawy says if countries are willing to invest in rare earth extraction, and tolerate the environmental implications, there’s no reason they can’t level the playing field with China.

“If we were willing to pay enough to produce these things, you can overcome those issues and you can produce these things in an environmentally responsible manner,” he says. “This is no worse than mining and producing aluminum, for example.”


Source : npr

The Lonely Hearts Club Man

Ian Taylor wrote . . . . . . . . .

Back in 2008, a small but very cute study asked people to stand at the bottom of a hill, look up and guess how steep it was. Some people were there alone, others accompanied by friends. The hill, on the campus of the University of Virginia, had an incline of 26°. But to the people who were there with friends, it looked a lot less steep: compared with those who turned up on their own, they significantly underestimated the gradient. The feel-good lesson? Everything looks easier when there’s a friend by your side.

Yes, mate, the benefits of friendship are profound. Having a strong social circle is associated with a longer life and fewer illnesses. Your pals lower your blood pressure and trigger positive chemicals in your brain. People with a strong social network are less stressed, more resilient and more optimistic. They’re more likely to be a healthy weight and less likely to suffer cognitive decline. They also enjoy some protection from cancer, heart disease and depression.

But there’s one group – a big one – that is missing out on these benefits. Men are lonely. Growing numbers of men are standing at the bottom of that hill, alone and overwhelmed, as surveys point to a recession of social connection among those of us with a Y chromosome.

A YouGov poll in 2019 concluded that one in five men have no close friends, twice as many as women. In 2021, the Survey Center on American Life found that the share of American men reporting that they had no close friends had jumped from 3 per cent in 1995 to 15 per cent. In the same research, the share of men saying they had at least six close friends halved, from 55 per cent to 27 per cent.

Why do men struggle to make or maintain friendships? And what can we do about it, not just as individuals but on a societal level? Because the sad truth is, an empty social calendar is the least of Billy No-Mates’ problems. Loneliness is a health hazard, as dangerous as smoking or alcoholism, according to some research.

A major study by scientists at Brigham Young University in the US found that long-term social isolation can increase a person’s risk of premature death by as much as 32 per cent. For this reason, some have called it the ‘shadow pandemic’. It was brought into focus during the COVID-19 lockdowns, when all of us were isolated and friendship became a hot research topic again, but it had spread around the world long before the novel coronavirus had.

“It’s a story I’ve been telling for 30 years,” says Prof Niobe Way, of New York University. As a developmental psychologist, Way has spent much of her career interviewing boys and men about their relationships, and how they change over time (documented in her book, Deep Secrets). She believes that hyper-masculine ideals are stripping young men of close friendships and the intimacy that goes with them.

“When you speak to boys aged 11, 12 or 13, they have this natural capacity and desire for closeness. And it’s not a bromance thing, it’s not just wanting to have dudes to hang out with. It’s wanting someone they can share their secrets with,” she says. “Then you speak to them again around 15 or 16 and you get this stereotype creeping into the responses. They start saying things like, ‘Oh sure, I have friends, everyone’s my best friend, I don’t care, it doesn’t matter.’”

MACHO MAN?

Way admits that young men being macho about their friendships is nothing new, but she thinks it’s telling that a change occurs in adolescence that – seemingly – frames the way a lot of men form and maintain their relationships all the way through adulthood.

If you’ve ever watched a sitcom, you know how it goes: men have superficial or transactional relationships with each other and bond by banter as they watch sport or drink beer. Women, in contrast, have deep and emotionally vulnerable conversations marked by shared secrets and interpersonal closeness. The funny thing is, these sitcom stereotypes are borne out by research.

“One of the main things we’ve shown is that the two sexes are very different in their social style,” says Prof Robin Dunbar, an anthropologist at the University of Oxford whose work centres on social bonding. “The girls’ social world has been built around personalised relationships. It matters who you are, not what you are.

“For men, what makes the difference is investing time in doing something together. It might be meeting up for a pint or arranging to climb Ben Nevis. The activity is irrelevant as long as it’s a group activity – and that often doesn’t involve a lot of conversation. There’s a bit of banter but really, the content is close to zero.”

The difference between male and female friendship is often characterised as side-by-side versus face-to-face relationships. When men meet their friends, they stand shoulder-to-shoulder: at the bar, at the football ground, fishing at a river. When women meet up, they often sit across a table from each other and talk.

The emotional investment and frequent contact that women prize is not as important for men, Dunbar says. Men can go months without seeing a mate but still consider that person a close friend. Could this superficial approach to friendship explain why men are losing friends and more likely to feel lonely?

It’s almost certainly a factor, but it’s not the only one. Sociological and generational changes also play a part. It was only a few generations ago that, for the majority of people, friends were constants in our lives, like family. People moved less, travelled less, changed jobs less. Today, our mobility – literal and figurative – means that friendships can more easily come and go.

Loneliness and isolation can also happen as a consequence of other things, says Dr Mike Jestico, a psychologist at the University of Leeds who also works with local men’s groups in the city. “Homelessness, addiction, breakdown of family home… Men are more likely to experience these than women, leading to isolation,” he says.

“Isolation is more likely to happen to men with lower incomes, as social experiences tend to cost money. One of the men in my research sang in a social singing group. But when the group moved venues, he couldn’t afford the bus fare to travel, thus increasing his isolation.”

Jestico says that a kind of ‘structural’ isolation can also be a factor. Single men are more likely to live alone in high-rise tower blocks, for example, and are less likely to be the primary caregiver of children.

“The bedroom tax meant single men could not afford to live in accommodation with more than one bedroom and moved into smaller accommodation, with some high-rise flats in Leeds having 75 to 80 per cent male residents in 2016,” he says. “One of my participants, who did not live with his children’s mother, was moved 15 miles from his two-bedroom flat to an affordable one-bedroom flat. This meant he lived further from his friends and children, who were much less likely to stay with him as he only had one bedroom.”

There’s more too. Throw in working from home, the closure of pubs, declining engagement in religious activities or social clubs, not to mention smartphone addiction and so-called social media, and perhaps the statistics on men’s shrinking friendship circles aren’t that surprising after all.

PLANS MAKE THE MAN

Another important factor is, of course, that men are a bit useless. When it comes to making plans or staying in contact with friends, men are socially lazy. This appears to be especially true in middle age when something strange happens with men’s friendships. At this age, men don’t appear to be lonely, on the surface.

“Data including men and women has often found a U-shaped relationship, where teenagers and the oldest people in society are the loneliest,” says John Ratcliffe, a researcher at the Centre of Loneliness Studies at Sheffield Hallam University. “That said, the highest suicide rates are in single men in their 40s and 50s.”

Men show a stronger link between marital status and loneliness than women, Ratcliffe says. Which is to say, unmarried women are less lonely than unmarried men. “I would link this statistical trend to a greater ‘reliance’ on partners for intimacy in men, and a greater ideation of the family role. For men who don’t have a partner, loneliness can be particularly severe.”

But even for men who are coupled up, middle age is tricky territory. At this stage of life, guys might drop out of the five-a-side team, or family commitments may keep them from the after-work drinks or the hobbies they once had more time for. They may have fewer peers in the workplace, and the friends they see on a regular basis may not be particularly close ones.

“Because males are socially lazy, what tends to happen is the wife ends up driving the social environment for the household,” says Dunbar. “The guys end up becoming friends with the partners of their wives’ friends – because they’re there.”

Men’s reliance on their partners can also lead to further problems. For one, it places a lot of pressure on the women (in heterosexual relationships, at least), and if the relationship breaks down or the man is widowed, it can leave him abruptly isolated. “When you have a divorce or you’re widowed, suddenly half your social world vanishes overnight,” Dunbar says.

So what’s to be done? Way says that it has to start with boys, addressing the culture of masculinity that young men grow up in.

“The lack of friendships amongst men is just a symptom of the bigger problem. I feel like journalists – and social scientists – bring the microscope in too much. And so we only focus on this specific symptom,” she says. “If you bring up the microscope just a tiny bit, you begin to see this is just a symptom. Because boys do have and want close friendships.”

Way believes we should try to foster boys’ latent caring and emotional side. Being socially and emotionally intelligent is not a female trait, she says: it’s a human one. “We don’t have to teach it, we just have to nurture it.”

Dunbar is more cautious about dismantling the way boys and men socialise, arguing that you see the same behaviours in monkeys and apes that you see at nurseries, schools and workplaces. He pictures two Mediterranean men sitting outside a cafe in the sunshine. They smoke cigarettes, drink coffee and stay there for hours saying almost nothing to one another.

“Don’t knock it!” he says. “This is boys bonding. Girls would never do that because they would want to talk to each other, but for boys you can sit down in complete silence and still build a relationship, providing there’s an activity or some kind of focus.”

For Dunbar, finding a shared activity is key, and his advice to lonely men is to start there, by finding a club or something you’re interested in. “Dancing, singing, playing rugby or tennis, climbing hills – you name it. They all trigger endorphins. And when you do it with other people, you end up bonding. It’s a very powerful mechanism,” he says.

Volunteering has a similar effect, whether it’s something charitable, or getting involved in your children’s sports teams, or local political or environmental movements. In 2020, Dunbar and his colleagues published a pan-European study in which they found that your future risk of depression is lower if you take part in three voluntary activities.

This taps into another stereotype about men: as much as we want to be loved, we also want to be useful. “In my research, a sense of ‘worth’ is often central to non-loneliness in men,” says Ratcliffe. “That is, feeling accepted, respected, loved, and/or admired. It also appeared related to neurological stimulation – the idea of being positively occupied.”

Ratcliffe believes that building self-worth in young boys and lonely men alike is important to undo the pandemic of disconnection. At the same time, he wants to deconstruct masculine expectations that say you have to be invulnerable, that compel men to say they’re okay when they’re not, or that they’re not lonely when they are.

Part of this is realising that you’re not alone in feeling alone, adds Way. “We have to normalise it so that people don’t somehow think they’re weird, but that it’s actually that culture has made it very hard for you to find meaningful relationships.”

Want to make a start? Way suggests sending this article to men you know, whether or not you think they are lonely. “Lots of men need a jumping-off point to start having conversations with other men about this kind of stuff. Send them the article and just ask them: ‘What do you think?’” It could be the start of a beautiful friendship.


Source : BBC Science Focus

The Scent of Flavor

Linda Bartoshuk wrote . . . . . . . . .

ARISTOTLE CONCLUDED that there are five elementary sensations: sight, hearing, touch—encompassing temperature, irritation, and pain—taste, and smell. He was mistaken.

When Aristotle sniffed an apple, he smelled it. When he bit into the apple and the flesh touched his tongue, he tasted it. But he overlooked something that caused 2,000 years of confusion. If Aristotle had plugged his nose when he tasted the apple, he might have noticed that the apple sensation disappeared, leaving only sweetness and perhaps some sourness, depending on the apple. He might have decided that the apple sensation was entirely different from the sweet and sour tastes, and he might have decided that there are six elementary sensations. He didn’t. It was not until 1810 that William Prout, then a young student at the University of Edinburgh, plugged his nose and noticed that he could not taste nutmeg. He wrote:

[T]he sensation produced by the nutmeg or any other substance, when introduced into the mouth, and which ceases the moment the nostrils are closed, is really very different from taste, and ought to be distinguished by another name; that that name should be flavor [emphasis original], the one which seems most naturally and properly to designate it.

We now understand the anatomy of the nose and mouth. There is a conduit from the back of the mouth up into the nose called the retronasal space. When we swallow, volatiles released from foods in the mouth are forced through the retronasal space, up into the nose. The perception of those volatiles gives us flavor. If you plug your nose, air currents cannot move through the retronasal space and flavor is blocked.

If Aristotle had recognized flavor as a distinct sensation, he might have paid attention to how taste, flavor, and smell really work together. Taste handles the sensations evoked when nonvolatiles stimulate receptors on the tongue. Flavor and smell respond to volatiles that stimulate receptors in the nose and send signals up the olfactory nerve. But those signals are processed in different parts of the brain. Smell tells us about objects in the world around us and flavor tells us about foods in our mouths. Smell and flavor cannot both use the olfactory nerve at the same time; they must take turns. The brain needs to know which of the senses is using the nerve in order to send the input to the correct area. Sniffing appears to be the cue that signals smell. Taste appears to be the main cue that signals flavor. The evidence for this, documented below, took a long time to gather, but the search has yielded many important insights with clinical and commercial implications.

The Victim of an Illusion

ARISTOTLE’S MISTAKE is understandable when we consider that retronasal olfactory sensations, or flavors, seem to come from the mouth even though we know they come from the nose. Consider the following demonstration. Plug your nose and put a jelly bean in your mouth. Chew it up and swallow it while keeping your nose tightly closed. You will probably taste sweetness and perhaps a bit of sourness, depending on the jelly bean, but you will not perceive the flavor. That is, you won’t know if the jelly bean is lemon flavored, lime flavored, raspberry flavored, or so on. Now unplug your nose. Suddenly you will perceive the flavor. When you unplugged your nose, the volatiles released by chewing the jelly bean traveled up through the retronasal space into your nose and produced a signal in your olfactory nerve that traveled to your brain.

Think about that moment when you perceived the flavor of the jelly bean. You perceived that flavor as coming from your mouth. Even knowing that the volatiles travel into your nose and the flavor sensation comes from your olfactory nerve, you will still perceive it as coming from your mouth. In 1917, two psychologists, Harry Hollingworth and Albert Poffenberger, became fascinated by this illusion. In their book, The Sense of Taste, they explained the localization of flavor to the mouth as, “true largely because of the customary presence of sensations of pressure, temperature, movement, and resistance which are localized in the mouth.”

This conclusion went unchallenged for decades. Research elsewhere supported the idea that the touch sense controls the localization for other sensations. For example, touch controls the localization of thermal stimuli. To demonstrate this, place two quarters in your freezer to make them cold. Hold one in your hand to make it body temperature. Arrange the three quarters on a flat surface with the body-temperature quarter in the middle. Touch the three quarters simultaneously with your index, middle, and ring fingers. All three quarters will feel cold. The touch sensations “capture” the cold sensation so that coldness seems to come from all three quarters.

The Localization of Flavor

WE ARE now able to anesthetize the chorda tympani taste nerve that mediates taste from the front, mobile part of the tongue. The chorda tympani nerve leaves the tongue in a common sheath with the trigeminal nerve, which mediates touch, temperature, irritation, and pain on the tongue. These nerves travel near the nerve mediating pain from lower teeth. When your dentist gives you an injection of lidocaine to block pain when filling a lower tooth, the nearby trigeminal and chorda tympani nerves are also anesthetized. As a result, your tongue becomes numb and you cannot taste on the side of the injection.

The chorda tympani and lingual nerves separate, and the chorda tympani passes through the middle ear, right behind the eardrum, before it travels to the brain. When otolaryngologists anesthetize the eardrum, they also inadvertently anesthetize the chorda tympani nerve.

As part of a study, we asked volunteers to sample yogurt and tell us where they perceived the flavor. The answer: from all around the inside of the mouth. Whether we anesthetized the chorda tympani taste nerve by dental injection—blocking taste and touch—or otolaryngological injection—blocking only taste—the result was the same. In both cases the flavor jumped to the unanesthetized side of the mouth. Our conclusion was that touch was less important: taste controls the perceptual localization of flavor to the mouth.

Is there any biological purpose served by the flavor localization illusion? Olfaction senses objects in the world outside of us, but also senses objects in our mouths. We perceptually localize smells to objects in the world around us. Perceptually localizing both taste and flavor to the mouth emphasizes both as attributes of food.

Taste and Flavor Distinction

PROUT’S INSIGHTFUL distinction between taste and flavor did not gain much traction. The only reference to it by his peers that I have ever found is a footnote written by his friend John Elliotson in his translation of a famous Latin text by Johann Friedrich Blumenbach, Institutiones Physiologicae (The Elements of Physiology). Prout gained his real renown for work in physical chemistry on the hydrogen atom. His work so impressed Ernest Rutherford that the proton was almost named the “prouton.”

Prout was not the only scientist to plug his nose in an effort to discover the origin of flavor. In France, just a few years after Prout, two other scientists, the anatomist Hippolyte Cloquet and the chemist Michel Eugène Chevreul, made similar observations. The Scottish philosopher Alexander Bain, one of the earliest to consider psychology a science, demonstrated his increasingly sophisticated understanding of flavor across the three editions of his book, The Senses and the Intellect. In the 1855 edition, “flavour” was “the mixed effect of taste and odour,” but in 1864, Bain noted that tastes are “the same whether the nostrils are opened or closed,” and flavor results when “odorous particles are carried into the cavities of the nose” and ceases when the nostrils are closed. As it turned out, these observations had almost as little impact as Prout’s.

The distinction between taste and flavor became blurred over the course of the twentieth century. The Arthur D. Little company in Boston was the first to market a method for flavor evaluation for the food industry. In 1945, Flavor, written by Ernest Crocker, a chemist working at Arthur D. Little, was published. Crocker used the word “flavor” to denote the aggregation of all the sensations evoked by eating: taste, olfaction, and touch; like Aristotle, Crocker lumped temperature, irritation, and pain in with touch. The sensations evoked when volatiles rise through the retronasal space into the nose were acknowledged to occur but were described simply as a “back entry” for the detection of odors.

Confusion about the sensation evoked by the travel of volatiles through that “back entry” is reflected in the terms used to describe it. We now use “retronasal olfaction,” but that term did not appear in a published paper until 1984. Prior to that, an array of terms had been suggested: “nose sensations,” “Gustatorisches Riechen” (gustatory smelling), “expiratory smelling,” “nasal chemoreception,” and “in-mouth olfaction,” to name the ones I’ve found.

Robert Moncrieff wrote The Chemical Senses in 1944. The updated edition published in 1960 was considered the standard text for graduate students in my era. Like the position taken at Arthur D. Little, Moncrieff wrote:

Flavour is a complex sensation. It comprises taste, odour, roughness or smoothness, hotness or coldness, and pungency or blandness. The factor which has the greatest influence is odour. If odour is lacking, then the food loses its flavour and becomes chiefly bitter, sweet, sour or saline.

At least Moncrieff argued that odor was the most important.

The International Organization for Standardization (ISO) is a federation of groups that set standards reflecting the views of at least 75% of the member bodies voting. The ISO definition of flavor is short but far from sweet: “Flavour: complex combination of the olfactory, gustatory and trigeminal sensations perceived during tasting.” Dictionaries do much the same. Merriam-Webster defines “flavor” as, “The quality of something that affects the sense of taste,” and, “The blend of taste and smell sensations evoked by a substance in the mouth.”

Part of the reason for this confusion is that we lack a verb to describe the perception of flavor. Consider how we describe the sensations evoked by taste, smell, and flavor. I can say, “I taste sugar” and “I smell cinnamon,” but not “I flavor cinnamon.” Using “flavor” as a verb means to add flavor to something rather than to perceive the sensation of flavor. When we want to describe how we perceive the flavor of cinnamon we borrow “taste” and say, “I taste cinnamon.” This only adds to the problem.

An Aggregate of All Sensations

SOME EXPERTS who use “flavor” to describe the aggregate of all sensations evoked by eating have argued that this aggregation has a unitary property. That is, the sensations evoked by eating combine to create something that is different from any of them, i.e., an emergent property. The nature of emergent properties arising from combinations of different sensations has been addressed by Michael Kubovy and David Van Valkenburg.

An emergent property of an aggregate is a property that is not present in the aggregated elements. At room temperature, for example, water is a liquid, but the elements that compose it are both gases. Thus, at room temperature, the property liquid is an emergent property of water. There are two kinds of emergent properties: eliminative and preservative. When hydrogen and oxygen combine to form water, the properties of the elements, both being gases, are not observable; they are eliminated by the process of aggregation. In the human sciences, such eliminative emergent properties are also common: we can mix two colored lights, such as red and yellow, and observers will not be able to tell whether the orange light they observe is a spectral orange or a mixture. Thus, color mixture is an eliminative emergent property. Preservative emergent properties were first noticed in 1890 by Christian von Ehrenfels, who described a melody as being an emergent property of the set of notes comprising it. The notes can be heard; indeed they must be heard for the melody to be recognized. In a melody, the elements are preserved in the process of aggregation; indeed, the emergence of the melody is conditional upon the audibility of the elements.

Even when “flavor” is considered to emerge from the aggregate of all the sensations evoked by eating, most agree that those individual sensations remain perceptible. In The Psychology of Flavor, Richard Stevenson explicitly notes that flavor is a “preservative emergent property.”

I wish that Crocker and the Arthur D. Little company had coined a new name for the aggregation of the sensations evoked by eating. As a result of this oversight, we are left with two meanings for the word “flavor.” There is little that we can do about this now except to point out that “flavor” can be used to denote retronasal olfaction, or the emergent property of the aggregate of sensations evoked by eating. For the remainder of this article, “flavor” refers to retronasal olfaction.

The Lady Who Could Not Taste Lasagna

NUMEROUS STUDIES have shown that altering the intensity of taste alters the intensity of flavor. The first hint of this dynamic was observed in a patient who cut her tongue licking chocolate pudding out of a can with a sharp edge. I asked the patient to describe what she had lost. She told me that her mother-in-law was a superb Italian cook. She described the wonderful smell she experienced coming from her mother-in-law’s lasagna and the terrible disappointment she felt when she took a bite and perceived nothing. This insight caught my attention because I knew that if the patient could smell the lasagna, her olfactory system was intact, and she should have experienced retronasal olfaction—the flavor of the lasagna. I worried about the possibility that the woman was lying in order to get me to testify in court on her behalf. Indeed, she was then in the process of suing the manufacturer of the can that cut her tongue. Nonetheless, I found her account convincing.

I decided to see if I could duplicate her experience with anesthesia. I ate half a chocolate bar and perceived the usual chocolate sensation I had learned to love as a child. I then anesthetized my mouth by rinsing with the topical anesthetic Dyclone and ate the other half of the chocolate bar. Most of the chocolate sensation was gone. The patient who could no longer taste her mother-in-law’s lasagna was right: if taste is taken away, something goes awry with flavor.

One of my students, Derek Snyder, pursued this topic in his PhD thesis, working with clinical colleagues who used unilateral and bilateral injected anesthesia—dental and otolaryngological—as well as topically applied anesthesia to block taste in volunteers. Blocking taste on only one side of the tongue caused retronasal olfactory sensations to drop by 25%. Blocking taste on both sides led to a drop of 50%. Smell sensations were unchanged.

Some individuals experience much more intense taste sensations than do others because of genetic variation—we call these individuals “supertasters”— and some individuals experience altered taste sensations arising from clinical pathologies. The intensity of our taste sensations predicts the intensity of flavor sensations independent of the ability to smell. If supertasters and non-supertasters both sniff a bowl of chocolate pudding, the two groups will experience, on average, the same chocolate smell. But if both groups eat the pudding, the supertasters will experience the more intense chocolate flavor.

Two taste modifiers also reveal the link between taste and flavor. Gymnema sylvestre is an Indian herb that blocks sweet taste. Medicinal use of this herb dates back two thousand years in Ayurvedic medicine. The ability of Gymnema sylvestre to block sweetness was revealed to the Western world by a nineteenth-century Irish botanist, Michael Edgeworth, while he was working in India. On the advice of neighbors, he chewed the leaves of the plant and discovered he could not taste the sugar in his tea. In 1847, Edgeworth wrote a letter to a fellow botanist, telling him about Gymnema sylvestre. The letter was read at the Linnean Society in London and ultimately described in more detail in the Pharmaceutical Journal.

As part of a study, we made tea from Gymnema sylvestre leaves. Volunteers rinsed their mouths with this tea and then sampled maple syrup and chocolate kisses. The sweetness was substantially reduced, and the maple and chocolate sensations were substantially reduced as well. Recovery from the effects of Gymnema sylvestre also demonstrated the link. The sweetness of sugar syrups made with maple, orange, and raspberry flavors were blocked. As the ability to taste sweetness recovered from the effects of Gymnema sylvestre, the sensations of maple, orange, and raspberry recovered at essentially the same pace. Anyone who wants to experience the effects of Gymnema sylvestre can find it online.

The second taste modifier came from berries found on the Synsepalum dulcificum bush, commonly known as miracle fruit. These berries were first described in English by Archibald Dalzel in 1793. Trained as a physician, but not very successful at it, he found himself in need of money and turned to slave trading in Africa. Dalzel’s observations of the local life where he lived in Africa led to a book, The History of Dahomy, in which he describes a “miraculous berry” that can convert “acids to sweets.” Consumption of the berries was first mentioned more than a century earlier in Wilhelm Müller’s Die Africanische Auf der Guineishen Gold-Cust Gelegene Landschafft Fetu (The African Landscape of Fetu, Situated on the Guinean Gold Coast). The berries were presumably known and used long before that. One of the most interesting uses of miracle fruit in Africa during the nineteenth century was to sweeten palm wine that had turned sour during the long journey from distillation to market.

The glycoprotein responsible for the effects of miracle fruit remains intact when the berries are freeze dried. We asked volunteers to let freeze-dried tablets of miracle fruit dissolve on their tongues. The miracle fruit increased the sweetness of tomatoes and strawberries, both foods that contain acid. That increase in sweetness also increased the tomato and strawberry flavors. As is the case with Gymnema sylvestre, anyone wishing to experience these effects can purchase tablets made from the freeze-dried berries online.

Recently, I had a video call with a very important patient: a young woman who had lost the ability to taste, but still retained her sense of smell. Although she is unable to perceive flavors, there may still be a role for some trigeminal sensation. This patient is unable to feel the burn of chilis, but she can perceive touch on her tongue. Thus, there is still a chance that some trigeminal sensations may also open or close the flavor door.

Together, these studies show that taste and retronasal olfaction are distinct sensations that remain distinct even though their perceived intensities are altered in mixtures of the two. Presumably, this occurs in a part of the brain that receives input from both sensations. Taste is not perceptually a part of retronasal olfaction, but rather signals that an incoming olfactory signal should be processed as flavor rather than smell. The taste cue acts like a valve that lets the retronasal olfactory signal pass through or obstructs it to the degree the valve is open or shut.

Following the Scent of Flavor

RESEARCHERS IN the food industry knew as early as the 1950s that intensifying taste can intensify flavor. Rose Marie Pangborn, for example, showed that adding sucrose to apricot juice intensified the apricot sensation. The reverse effect, intensification of sweetness by retronasal perception of volatiles, was found a bit later. One of the earliest hints came from an experiment we undertook during 1977 in which the addition of ethyl butyrate (a fruity flavor) increased the taste of saccharin. Another hint came from a horticultural study linking the sweetness of tomatoes to specific volatiles present in the tomatoes. In the following forty years, only a few volatiles were identified that could enhance sweetness, and the effects were quite small.

After leaving Yale for the sunny skies of the University of Florida in the early 2000s, I met Harry Klee, a botanist and world expert on the volatiles in tomatoes. Over the course of the twentieth century, tomatoes were bred to look and ship better with little regard for their palatability. This led to a decline in the flavors of tomatoes. Klee wanted to halt this process and restore highly palatable flavors to the tomato. Howard Moskowitz, a Harvard-trained sensory psychologist who had left academia for the food industry, was an expert at improving the flavors of food products using psychophysics and mathematics. His success with spaghetti sauce was chronicled by Malcolm Gladwell in a New Yorker article. I asked him if he would be willing to work with us on tomatoes.

Moskowitz was fascinated by the possibility of applying his techniques from marketing research to the natural world. We grew 150 different varieties of tomatoes that were mostly heirlooms, that is, tomatoes with a lot of genetic diversity. The tomatoes were analyzed for their chemical content—sugars, acids, volatiles—along with their sensory and hedonic properties—smell, taste, flavor, palatability. We used a method that provides valid comparisons for the perceived intensities of sensations across different people: essential when sensory intensities are to be associated with physical measures. The data were then put into a multiple regression model, allowing us to identify which tomatoes were liked the best and which constituents made them the most liked.
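The analysis described here is, at its core, a multiple regression. The following Python sketch shows the idea on synthetic data; the variable names, ranges, and coefficients are invented for illustration and do not reproduce the study’s measurements or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tomatoes = 150  # the study grew 150 varieties

# Synthetic stand-ins for the measured constituents.
sugars = rng.uniform(1.0, 6.0, n_tomatoes)
acids = rng.uniform(0.2, 0.8, n_tomatoes)
volatiles = rng.uniform(0.0, 10.0, n_tomatoes)

# Pretend liking depends on all three constituents, plus noise.
liking = 2.0 * sugars - 1.5 * acids + 0.8 * volatiles + rng.normal(0, 1, n_tomatoes)

# Ordinary least squares: which constituents predict liking?
X = np.column_stack([np.ones(n_tomatoes), sugars, acids, volatiles])
coeffs, *_ = np.linalg.lstsq(X, liking, rcond=None)
print("intercept, sugars, acids, volatiles:", np.round(coeffs, 2))
```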

On a whim, I used the data we had gathered to explore a different question: which constituents were contributing to sweetness? To my amazement, flavor—retronasal perception of the volatiles—was contributing substantially to sweetness. Checking individual volatiles identified those responsible. A cherry tomato called “Matina,” for example, contained less sugar than another called “Yellow Jelly Bean,” but the Matina was about twice as sweet as the Yellow Jelly Bean. The volatiles that enhanced sweetness were more abundant in Matina.

We then moved on to strawberries, oranges, and peaches. Each fruit produced a mostly new and different group of sweetness-enhancing volatiles, yielding almost 100 volatiles in total. One exception was blueberries, which contained very few volatiles that enhanced sweetness. When you taste sweetness in a blueberry, you are essentially tasting the sugar. When you taste sweetness in the other fruits we studied, some of the sweetness is coming from the sugar, but a lot of it originates in the volatiles that enhance the sweetness of the sugar. In the early years of studying volatile-enhanced sweetness, none of us had realized that some fruits contain many such volatiles. Each one may produce only a small effect, but the effects are cumulative.

Future Applications

SWEETNESS-ENHANCING volatiles are naturally found in fruits, but adding these volatiles to any food or beverage will add sweetness. Incidentally, sweetness-enhancing volatiles also work on artificial sweeteners. The concentrations of many sweet-enhancing volatiles in fruits are very low, making them a safe alternative to sugars and artificial sweeteners. Since the volatiles that enhance sweetness tend to be different in each fruit, the study of additional fruits will likely add to the list of those already identified.

Noam Sobel and his team have demonstrated that olfactory mixtures behave like mixtures of colored lights. Combinations involving odorants of equal perceived intensities suppress one another, resulting in a weak olfactory sensation they called “Laurax” — not to be confused with the famous Lorax described by Dr. Seuss. Laurax was also called “olfactory white” to emphasize its similarity to the white light that can result from color mixtures. In the terminology of Kubovy and Van Valkenburg, these are examples of eliminative emergence. This raises an interesting question: as we combine more and more volatiles that enhance sweetness, will their flavors cancel each other out while the sweetness increases? If so, volatile sweetening will have even more commercial applications.

The ability of volatiles to enhance taste is not limited to sweetness. A different group of volatiles enhances saltiness and is under study for its potential to reduce dependence on sodium. A few volatiles have also been identified that can enhance sourness and bitterness. This may tell us more about how this enhancement occurs in the brain, but these volatiles are unlikely to have as many applications as those that enhance sweetness and saltiness.

Volatile-enhanced tastes are also exciting for their clinical potential. Shortly before the COVID-19 pandemic began, I evaluated a patient who retained normal olfaction but had a reduced ability to taste sweetness. Adding sweetness-enhancing volatiles to sucrose allowed her to perceive normal sweetness. The sweetness-enhancing volatiles created a signal in her olfactory nerve that traveled to the area of the brain that processes sweetness, bypassing her damaged taste nerves. In theory, when we have identified enough taste-enhancing volatiles, we should be able to restore at least some taste perception to patients with taste nerve damage and intact olfaction.

Our love of sweet and salty tastes is at least partly hardwired into our brains. This source of pleasure is important in our lives. The interactions between the distinct sensations of taste and flavor have given us new tools to safeguard those pleasures while reducing our dependence on sugars, artificial sweeteners, and sodium.


Source : Inference

How Bell’s Theorem Proved ‘Spooky Action at a Distance’ Is Real

Ben Brubaker wrote . . . . . . . . .

We take for granted that an event in one part of the world cannot instantly affect what happens far away. This principle, which physicists call locality, was long regarded as a bedrock assumption about the laws of physics. So when Albert Einstein and two colleagues showed in 1935 that quantum mechanics permits “spooky action at a distance,” as Einstein put it, this feature of the theory seemed highly suspect. Physicists wondered whether quantum mechanics was missing something.

Then in 1964, with the stroke of a pen, the Northern Irish physicist John Stewart Bell demoted locality from a cherished principle to a testable hypothesis. Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. In the years since, experiments have vindicated quantum mechanics again and again.

Bell’s theorem upended one of our most deeply held intuitions about physics, and prompted physicists to explore how quantum mechanics might enable tasks unimaginable in a classical world. “The quantum revolution that’s happening now, and all these quantum technologies — that’s 100% thanks to Bell’s theorem,” says Krister Shalm, a quantum physicist at the National Institute of Standards and Technology.

Here’s how Bell’s theorem showed that “spooky action at a distance” is real.

Ups and Downs

The “spooky action” that bothered Einstein involves a quantum phenomenon known as entanglement, in which two particles that we would normally think of as distinct entities lose their independence. Famously, in quantum mechanics a particle’s location, polarization and other properties can be indefinite until the moment they are measured. Yet measuring the properties of entangled particles yields results that are strongly correlated, even when the particles are far apart and measured nearly simultaneously. The unpredictable outcome of one measurement appears to instantly affect the outcome of the other, regardless of the distance between them — a gross violation of locality.

To understand entanglement more precisely, consider a property of electrons and most other quantum particles called spin. Particles with spin behave somewhat like tiny magnets. When, for instance, an electron passes through a magnetic field created by a pair of north and south magnetic poles, it gets deflected by a fixed amount toward one pole or the other. This shows that the electron’s spin is a quantity that can have only one of two values: “up” for an electron deflected toward the north pole, and “down” for an electron deflected toward the south pole.

Imagine an electron passing through a region with the north pole directly above it and the south pole directly below. Measuring its deflection will reveal whether the electron’s spin is “up” or “down” along the vertical axis. Now rotate the axis between the magnet poles away from vertical, and measure deflection along this new axis. Again, the electron will always deflect by the same amount toward one of the poles. You’ll always measure a binary spin value — either up or down — along any axis.

It turns out it’s not possible to build any detector that can measure a particle’s spin along multiple axes at the same time. Quantum theory asserts that this property of spin detectors is actually a property of spin itself: If an electron has a definite spin along one axis, its spin along any other axis is undefined.

Local Hidden Variables

Armed with this understanding of spin, we can devise a thought experiment that we can use to prove Bell’s theorem. Consider a specific example of an entangled state: a pair of electrons whose total spin is zero, meaning measurements of their spins along any given axis will always yield opposite results. What’s remarkable about this entangled state is that, although the total spin has this definite value along all axes, each electron’s individual spin is indefinite.

Suppose these entangled electrons are separated and transported to distant laboratories, and that teams of scientists in these labs can rotate the magnets of their respective detectors any way they like when performing spin measurements.

When both teams measure along the same axis, they obtain opposite results 100% of the time. But is this evidence of nonlocality? Not necessarily.

Alternatively, Einstein proposed, each pair of electrons could come with an associated set of “hidden variables” specifying the particles’ spins along all axes simultaneously. These hidden variables are absent from the quantum description of the entangled state, but quantum mechanics may not be telling the whole story.

Hidden variable theories can explain why same-axis measurements always yield opposite results without any violation of locality: A measurement of one electron doesn’t affect the other but merely reveals the preexisting value of a hidden variable.

Bell proved that you could rule out local hidden variable theories, and indeed rule out locality altogether, by measuring entangled particles’ spins along different axes.

Suppose, for starters, that one team of scientists happens to rotate its detector relative to the other lab’s by 180 degrees. This is equivalent to swapping its north and south poles, so an “up” result for one electron would never be accompanied by a “down” result for the other. The scientists could also choose to rotate it by some intermediate angle — 60 degrees, say. Depending on the relative orientation of the magnets in the two labs, the probability of opposite results can range anywhere between 0% and 100%.

Without specifying any particular orientations, suppose that the two teams agree on a set of three possible measurement axes, which we can label A, B and C. For every electron pair, each lab measures the spin of one of the electrons along one of these three axes chosen at random.

Let’s now assume the world is described by a local hidden variable theory, rather than quantum mechanics. In that case, each electron has its own spin value in each of the three directions. That leads to eight possible sets of values for the hidden variables, which we can label 1 through 8: in set 1 the first electron’s spin is “up” along all three axes, in set 8 it is “down” along all three, and sets 2 through 7 cover the six mixed combinations, with the second electron’s spins opposite in every case.

The set of spin values labeled 5, for instance, dictates that the result of a measurement along axis A in the first lab will be “up,” while measurements along axes B and C will be “down”; the second electron’s spin values will be opposite.

For any electron pair possessing spin values labeled 1 or 8, measurements in the two labs will always yield opposite results, regardless of which axes the scientists choose to measure along. The other six sets of spin values all yield opposite results in 33% of different-axis measurements. (For instance, for the spin values labeled 5, the labs will obtain opposite results when one measures along axis B while the other measures along C; this represents one-third of the possible choices.)

Thus the labs will obtain opposite results when measuring along different axes at least 33% of the time; equivalently, they will obtain the same result at most 67% of the time. This result — an upper bound on the correlations allowed by local hidden variable theories — is the inequality at the heart of Bell’s theorem.
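To see where that ceiling comes from, it helps to brute-force it. The following minimal Python sketch (ours, not part of the original article) enumerates all eight hidden-variable assignments and tallies every different-axis measurement pairing:

    from itertools import product

    # Each hidden-variable set fixes the first electron's spin
    # (+1 = up, -1 = down) along the three axes A, B and C; the
    # second electron's spin is opposite along every axis.
    best_same_rate = 0.0
    for hidden in product([+1, -1], repeat=3):
        same, total = 0, 0
        for a1, a2 in product(range(3), repeat=2):
            if a1 == a2:
                continue              # count only different-axis choices
            result_1 = hidden[a1]     # lab 1 measures electron 1
            result_2 = -hidden[a2]    # lab 2 measures electron 2
            same += (result_1 == result_2)
            total += 1
        best_same_rate = max(best_same_rate, same / total)

    print(f"Highest same-result rate: {best_same_rate:.3f}")  # 0.667

The two all-alike sets never produce matching results on different axes, while each mixed set matches on exactly 4 of the 6 different-axis pairings, which is why no local hidden variable theory can push agreement above 2/3.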

Above the Bound

Now, what about quantum mechanics? We’re interested in the probability of both labs obtaining the same result when measuring the electrons’ spins along different axes. The equations of quantum theory provide a formula for this probability as a function of the angles between the measurement axes.

According to the formula, when the three axes are all as far apart as possible — that is, all 120 degrees apart, as in the Mercedes logo — both labs will obtain the same result 75% of the time. This exceeds Bell’s upper bound of 67%.
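For the curious, the formula is the standard singlet-state prediction: the labs obtain the same outcome with probability sin²(θ/2), where θ is the angle between their measurement axes. A few lines of Python (again ours, for illustration) confirm the special cases discussed above:

    import math

    # Standard quantum prediction for a spin singlet: probability
    # that both labs obtain the SAME result when their measurement
    # axes differ by theta degrees.
    def p_same(theta_deg: float) -> float:
        return math.sin(math.radians(theta_deg) / 2) ** 2

    for angle in (0, 120, 180):
        print(f"{angle:3d} degrees: {p_same(angle):.2f}")
    #   0 degrees: 0.00  (same axis: always opposite results)
    # 120 degrees: 0.75  (Mercedes-logo spacing: above the 67% bound)
    # 180 degrees: 1.00  (swapped poles: never opposite)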

That’s the essence of Bell’s theorem: If locality holds and a measurement of one particle cannot instantly affect the outcome of another measurement far away, then the results in a certain experimental setup can be no more than 67% correlated. If, on the other hand, the fates of entangled particles are inextricably linked even across vast distances, as in quantum mechanics, the results of certain measurements will exhibit stronger correlations.

Since the 1970s, physicists have made increasingly precise experimental tests of Bell’s theorem. Each one has confirmed the strong correlations of quantum mechanics. In the past five years, various loopholes have been closed. Physicists continue to grapple with the implications of Bell’s theorem, but the standard takeaway is that locality — that long-held assumption about physical law — is not a feature of our world.


Source : Quanta Magazine

Chart: Comparison of R&D of U.S., China and Japan

Source : Nikkei

Scientists Know Why We Are So Indecisive – and How to Get Over It

Sarah Sloat wrote . . . . . . . . .

WHEN YOU MAKE a decision, you try to control the future. If you take a new job in a new city, you also try to move toward a vision of a potentially better you. The same applies to other life-changing choices, like whether or not to have kids. Deciding to turn what-ifs into reality is what propels your story forward.

But the hard truth is that study after study shows humans are not great decision-makers. We restrict life’s possibilities to a narrow subset of choices; we tend to omit some of the most important objectives; and we’re not good at estimating the probability of certain outcomes.

“People are generally quite bad at perceiving and using probability information,” says Katherine Fox-Glassman, a psychology professor at Columbia University who studies decision-making.

“Our brains are really well suited to so many things — understanding uncertainty is not one of those things for most people,” Fox-Glassman tells me. “People misinterpret, distort, ignore, and misuse probability in dozens of well-documented ways.”

Fox-Glassman’s students often tell her they’re excited to take her class because they want to learn to make better decisions. By the time the semester ends, they tend to report that they didn’t achieve their original goal — but they do pay more attention to how they make their choices.

“I’ve had that experience, and it’s absolutely a double-edged sword — any time you’re bringing more strategy or wisdom to a situation, you’re also raising the possibility of overthinking things or getting too in your head about it,” she says.

Ultimately, making a big decision is about balance — and action. If you don’t make a choice, someone — or something — will make it for you.

THE FACTORS THAT GO INTO A DECISION

We make many different types of decisions thousands of times a day, but a good chunk of the most consequential choices are made in the context of social interactions. In practice, this could mean deciding whether or not to break up with someone — or whether or not to speak your mind to your boss about something that’s bothering you.

Successful social decision-making typically depends on how well we understand the intentions, emotions, and beliefs of others. When you make a social decision, you factor in traditional decision-making — which typically involves the processes of learning, valuation, and feedback — as well as the mental state of the other person or people involved. This process engages specific neural networks.

This type of decision-making is also where reason meets emotion. Studies show that most of the time sentimental decisions are the result of intuitive processing while practical decisions are the result of rational processing. But this isn’t true for all people — especially people who are emotionally connected to their work.

There’s no one-size-fits-all approach to decision-making because people are individuals. Some people might make a pros-and-cons list when deciding whether or not to date someone — others might just go with their gut.

HOW TO FEEL CONFIDENT ABOUT YOUR DECISIONS

Because every decision, and every decision maker, will be different, there is no universal first step for approaching a choice, Fox-Glassman says.

But, she says, “it can be helpful for the decision maker to list their goals and then to look at what type each goal is.”

If the decision is more of a practical one or concerns money, writing out an objective pros-and-cons list can help. That kind of calculation-style approach works well for meeting easily quantifiable goals, Fox-Glassman explains.

If goals are more social — like trying to win others’ approval — a more rule-based decision can be more appropriate, she says. This means deciding that something is the right thing to do, regardless of the costs. Meanwhile, decisions about emotional goals — choosing what feels good, avoiding what feels bad — might be made without you even realizing you’ve made a decision yet.

Most big decisions involve elements of all of these goals, Fox-Glassman says. And multiple goals can conflict — for example, you may want to have another child, but the family finances just aren’t there.

“When different modes of decision-making each lead us to a different answer, that’s unpleasant and might make us feel bad about any choice we make — or even hesitant to decide at all,” she says. “But if we can figure out where the conflict came from, we may have the option to decide which mode we want to follow, or which of the two conflicting goals is more important to us. It’ll still be a trade-off, but at least it’s more transparent.”

In the end, making almost any decision beats making none. Studies suggest that people who dwell in a state of inaction over decisions are more likely to feel regret than those who make a decision. And once you do make a hard decision, your brain adjusts its preferences — meaning you’re more likely to be able to make another hard decision in the future.


Source : Inverse

Many Scientists See Fusion as the Future of Energy – and They’re Betting Big

Dominic Bliss wrote . . . . . . . . .

The hottest place in our solar system is not the Sun, as you might think, but a machine near a south Oxfordshire village called Culham. Housed inside a vast hangar, it’s a nuclear fusion experiment called JET, or Joint European Torus. When operating, temperatures here can reach 150 million degrees Celsius – ten times hotter than the centre of the Sun. On December 21st 2021, JET set a new record by producing 59 megajoules of sustained energy through a process known as nuclear fusion.

59 megajoules isn’t a huge amount – just enough to power three domestic tumble dryer cycles. Nevertheless, as far as humanity is concerned, proof that nuclear fusion works is a very big deal indeed. Fusion produces energy by fusing atomic nuclei together, the opposite of what happens in all nuclear power stations, where atomic nuclei are split through nuclear fission. Once harnessed on a commercial scale, fusion could produce so much energy from so little raw material that it may solve all of humanity’s energy problems in one fell swoop – amongst many other things.

Professor Stephen Hawking was once famously asked which problem he hoped scientists might solve before the end of the 21st century. “I would like nuclear fusion to become a practical power source,” he replied. “It would provide an inexhaustible supply of energy, without pollution or global warming.”

Right now, though, JET requires more energy to operate than it produces. Net energy gain – the holy grail of nuclear fusion, the point at which a reactor releases more energy than it consumes – still eludes the scientists at JET, and indeed every other scientist working in this field.

Up close, JET is truly awesome. While the experts at the Culham Centre for Fusion Energy (overseen by the UK Atomic Energy Authority) are familiar with every inch of this huge machine, to the untrained eye it’s a bewildering, asymmetrical jumble of steel bars, joists, cages, ladders, wires, cables, pipes, ducts, switches, monitors, valves, plugs, scaffolding, catwalks and steel runners.

On the outside it’s 12 metres tall with a diameter of over seven metres. The entire machine weighs 2,800 tonnes. Hidden somewhere in the centre is the doughnut-shaped (or toroidal) vessel called a tokamak. (Based on early Soviet designs from the 1950s, ‘tokamak’ is an acronym derived from Russian phrases meaning ‘toroidal chamber’ and ‘magnetic coil’.)

Although nuclear fusion reactors are far safer than nuclear power stations (more on that later), the security and safety at Culham is understandably tight. JET itself is housed behind one-metre-thick, 20-metre-tall concrete barriers, which close during operation, primarily to contain dangerous neutrons produced by the fusion reaction. Entry is through a security turnstile, with each visitor measured by a dosimeter for radiation levels on entry and exit.

First operational in 1983, JET has produced nuclear fusion pulses on tens of thousands of separate occasions. At the end of next year, after 40 years of service, it will perform its swan song before eventually being decommissioned. The scientific understanding and much of the technology it has proven will be used in the next generation of tokamak fusion projects. Currently being constructed near Marseille, in the south of France, is the International Thermonuclear Experimental Reactor, or ITER – a collaboration of 35 nations, including the UK. There are also plans for a British project called Spherical Tokamak for Energy Production, or STEP. On 3 October its location was confirmed as the site of the West Burton power station in Nottinghamshire.

The man in charge of JET is the UK Atomic Energy Authority’s CEO, Professor Ian Chapman. He predicts that ITER will start achieving net energy gain by the late 2040s. Ask him when nuclear fusion might produce cost-effective energy on a commercial scale, and he’s less precise.

“That’s an imponderable question and depends so much on energy dynamics, government policy, and what’s going on with carbon pricing,” he tells National Geographic UK. “I never answer the question. I always quote Lev Artsimovich, one of the founding fathers of the tokamak. He was asked this question at a press conference in the Soviet Union in the 1970s, and his answer was: ‘When mankind needs it, maybe a short time before that.’ I think that’s still true.”

Fusion futures

With the fuel crisis currently dominating UK headlines, Chapman points out that the energy we generate using current methods will eventually become so expensive that governments and private companies will be compelled to invest further and take more risks to harness nuclear fusion. He explains how original investment for JET started in the late 1970s, in the aftermath of the global oil crisis. Now, energy insecurity fomented by the war in Ukraine might prove to be a similar catalyst for nuclear fusion.

“Energy policy happens on decadal timeframes,” he adds. “No parliaments anywhere in the world work on decadal timeframes so, unfortunately, it’s shocks to the market which generally stimulate action in energy.”

Even with massive investment, there are very high hurdles still to overcome: technical challenges such as fuel performance and reactor maintenance; political challenges, too, although the Americans, the Europeans, the Russians, the Chinese, the Japanese and the Australians have all warmed to the idea.

As have Britons. In October 2021, the Department for Business, Energy & Industrial Strategy published its strategy on nuclear fusion. This form of energy, it notes, will be abundant, efficient, carbon-free, safe, and will produce radioactive waste much shorter-lived than that of current nuclear power stations.

Arthur Turrell is a former plasma physicist at Imperial College London, and author of a 2021 book, The Star Builders: Nuclear Fusion & the Race to Power the Planet. He says that “controlling fusion to produce energy is the biggest technological challenge we’ve ever taken on as a species”. He explains how fusion reactors, or “star machines”, are indescribably complex, with tens of millions of individual parts.

The science bit

So just how does nuclear fusion work? It is the fusing of light nuclei to form a heavier nucleus, at the same time releasing huge amounts of energy. It’s what happens in the middle of stars like our Sun, providing the power that drives the universe. Crucially, it’s the opposite of nuclear fission, the process used in nuclear power stations whereby huge amounts of energy are released when nuclei are split apart to form smaller nuclei.

The Sun notwithstanding, humans are currently experimenting with two main methods of fusion. JET, for example, uses what’s known as magnetic confinement fusion: two isotopes of hydrogen – deuterium and tritium – are heated to temperatures of up to 150 million degrees Celsius, becoming an electrically charged gas called plasma, which is confined in the doughnut-shaped tokamak and controlled with strong magnetic fields. The deuterium and tritium fuse together to produce helium and high-speed neutrons, releasing vast amounts of energy in the process – 10 million times more energy per kg of fuel than that released by burning fossil fuels. As Turrell neatly explains, the mass of deuterium-tritium fuel equivalent to an Olympic swimming pool of water would contain more energy than the entire planet uses in a year.
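That 10-million-fold figure is easy to sanity-check with a back-of-envelope calculation of our own; the 17.6 MeV released per deuterium-tritium reaction is the textbook value, while the roughly 40 MJ/kg for fossil fuels is an assumed round number:

    # Rough energy density of D-T fusion fuel versus fossil fuel.
    MEV_TO_J = 1.602e-13     # joules per MeV
    AMU_TO_KG = 1.661e-27    # kilograms per atomic mass unit

    fuel_mass = (2.014 + 3.016) * AMU_TO_KG  # one deuteron + one triton
    energy_per_reaction = 17.6 * MEV_TO_J    # textbook D-T yield

    fusion_j_per_kg = energy_per_reaction / fuel_mass  # ~3.4e14 J/kg
    fossil_j_per_kg = 4.0e7                            # assumed ~40 MJ/kg

    print(f"fusion: {fusion_j_per_kg:.1e} J/kg")
    print(f"ratio:  {fusion_j_per_kg / fossil_j_per_kg:.0e}")  # ~1e7

The ratio comes out within rounding distance of the article’s “10 million times” claim.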

The other fusion method is called inertial confinement fusion, using powerful lasers to heat and compress deuterium and tritium inside a capsule. One of the leading developments in this is at the National Ignition Facility (NIF) in California.

Of course, proving that nuclear fusion works is not the same as harnessing it on a commercial scale. There used to be a common quip traded between nuclear physicists, something along these lines: “Nuclear fusion is 30 years away; and always will be.”

That old quip is starting to lose power just as nuclear fusion is starting to gain it. All over the world there are fusion pioneers on Promethean missions to steal the Sun’s energy production process, and replicate it here on Earth. It’s estimated that, currently, there are over 100 experimental fusion reactors worldwide, some under construction, others already operating. As Turrell explains: “Public and private, big and small, star machines are taking off.”

In the UK alone, there are four major facilities – all currently in Oxfordshire: in addition to JET, there are Tokamak Energy, First Light Fusion, and General Fusion.

Ultimately, they are all striving for net energy gain. Ask Turrell where he believes this might first occur, and his eye is drawn to the National Ignition Facility in California where, already, they have achieved 70 per cent of net energy gain. He suggests they are “a small tweak away” from reaching 100 per cent.

Chapman delights in all this competition. “This is all good for the community,” he says. “We all want nuclear fusion to happen. We should try a diverse range of different options. Spend more money, take more risks.”

He compares this noble endeavour to the space race between the United States and the Soviet Union in the 1960s. “When Kennedy made his speech, it was inconceivable that, seven years later, man would walk on the Moon. If you have the political imperative to go really fast and spend money then you can achieve incredible things. The US was spending over four per cent of GDP on the space race.”

Fans of fusion suggest this sustainable form of energy may eventually replace all our nuclear fission power stations. There are a number of clear benefits.

Firstly, the fuel supply is abundant. “Deuterium is outrageously common,” Turrell writes in his book. “Tritium… can be made from another element that is extremely plentiful: lithium. Perfecting power from nuclear fusion could provide humanity with millions, perhaps billions, of years of clean energy.”

Chapman concurs: “For the rest of my life, all the fuel I’d need is the water that would fit in a bathtub and the lithium that would fit in two laptop batteries. That’s all I’d need, for 60 years.” However, some critics point out that the Earth’s supplies of tritium won’t be sufficient. One solution, which ITER is exploring, is to manufacture tritium from lithium using what they call breeding blankets. These would form part of the reactor wall and cause neutrons to react with lithium in the blanket to produce further tritium. If it works, power plants could end up being self-sufficient in tritium.

Tackling fear

Inevitably, mere mention of the word “nuclear” fills many energy consumers with dread. Chapman understands why but quickly rebuts the thought, stressing how fusion is virtually risk-free in comparison to fission. This is the second clear benefit.

“In a fission plant, there’s enough fuel in there for two or three weeks,” he says. “If a really off-normal thing happens, like a tidal wave or an earthquake, that fuel will keep going for two or three weeks. You’ve got no control over it. In fusion, there’s enough fuel inside the machine for about ten seconds, so if you want it to stop, it just stops. It’s physically impossible to have a chain reaction process. I’ve spent my 20-year career trying to keep the bloody thing going.”

While those working in nuclear fusion are clearly biased, they all agree that this form of energy will be vital in an energy-greedy world. Renewable energy will still play an important part, but renewables may not provide enough.

“We want to make the world a better place and allow everyone access to sustainable energy,” Chapman says. “We should do renewables everywhere we can. But they just don’t work everywhere – if you don’t have access to sun or wind, for example. Fusion works everywhere and the fuel is readily available. It stops energy poverty; it gives us energy equality; it means we stop having wars over energy. It would be such a massive revolution and such an important part of the future portfolio of energy.”

He believes nuclear fusion will change the world as radically as the Industrial Revolution did. Turrell goes one step further, suggesting this form of energy could end up powering the spaceships that eventually transport humans on interstellar journeys. “Fusion rockets are humanity’s best hope for travelling across the vast distances of space,” he told National Geographic UK.

Back in the hangar at Culham, JET sits idle, waiting for its next experiment. Before it is eventually decommissioned in late 2023, it will conduct several further fusion experiments, mainly on behalf of the new ITER plant.

Meanwhile it lies like a sleeping dragon. Once it wakes, you’d be wise to keep your distance. When this dragon breathes fire, it burns at 150 million degrees.


Source : National Geographic

Wearable Sensors Styled into T-shirts and Face Masks

Caroline Brogan wrote . . . . . . . . .

Imperial researchers have embedded new low-cost sensors that monitor breathing, heart rate, and ammonia into t-shirts and face masks.

Potential applications range from monitoring exercise, sleep, and stress to diagnosing and monitoring disease through breath and vital signs.

Spun from a new Imperial-developed cotton-based conductive thread called PECOTEX, the sensors cost little to manufacture: a metre of thread, enough to seamlessly integrate more than ten sensors into clothing, costs just $0.15 to produce. PECOTEX is also compatible with industry-standard computerised embroidery machines.

First author of the research Fahad Alshabouna, PhD candidate at Imperial’s Department of Bioengineering, said: “The flexible medium of clothing means our sensors have a wide range of applications. They’re also relatively easy to produce which means we could scale up manufacturing and usher in a new generation of wearables in clothing.”

The researchers embroidered the sensors into a face mask to monitor breathing, a t-shirt to monitor heart activity, and textiles to monitor gases like ammonia, a component of the breath that can be used to detect liver and kidney function. The ammonia sensors were developed to test whether gas sensors could also be manufactured using embroidery.

Fahad added: “We demonstrated applications in monitoring cardiac activity and breathing, and sensing gases. Future potential applications include diagnosing and monitoring disease and treatment, monitoring the body during exercise, sleep, and stress, and use in batteries, heaters, and anti-static clothing.”

The research is published in Materials Today.

Seamless sensors

Wearable sensors, like those on smartwatches, let us continuously monitor our health and wellbeing non-invasively. Until now, however, there has been a lack of suitable conductive threads, which explains why wearable sensors seamlessly integrated into clothing aren’t yet widely available.

Enter PECOTEX. Developed and spun into sensors by Imperial researchers, the material is machine washable, and is less breakable and more electrically conductive than commercially available silver-based conductive threads, meaning more layers can be added to create complex types of sensor.

The researchers tested the sensors against commercially available silver-based conductive threads during and after they were embroidered into clothing.

During embroidery, PECOTEX was more reliable and less likely to break, allowing for more layers to be embroidered on top of each other.

After embroidery, PECOTEX demonstrated lower electrical resistance than the silver-based threads, meaning it performed better at conducting electricity.

Lead author Dr Firat Güder, also of the Department of Bioengineering, said: “PECOTEX is high-performing, strong, and adaptable to different needs. It’s readily scalable, meaning we can produce large volumes inexpensively using both domestic and industrial computerised embroidery machines.

“Our research opens up exciting possibilities for wearable sensors in everyday clothing. By monitoring breathing, heart rate, and gases, they can already be seamlessly integrated, and might even be able to help diagnose and monitor treatments of disease in the future.”

The embroidered sensors retained the intrinsic properties of the fabric such as wearability, breathability and feel-on-the-skin. They are also machine washable at up to 30°C.

Next, the researchers will explore new application areas like energy storage, energy harvesting and biochemical sensing for personalised medicine, as well as finding partners for commercialisation.


Source: Imperial College