Data, Info and News of Life and Economy

Category Archives: Science

Scientists Design Skin Patch That Takes Ultrasound Images

The future of ultrasound imaging could be a sticker affixed to the skin that can transmit images continuously for 48 hours.

Researchers at Massachusetts Institute of Technology (MIT) have created a postage stamp-sized device that creates live, high-resolution images. They reported on their progress this week.

“We believe we’ve opened a new era of wearable imaging: With a few patches on your body, you could see your internal organs,” said co-senior study author Xuanhe Zhao, a professor of mechanical engineering and civil and environmental engineering at MIT.

The sticker — about 3/4-inch across and about 1/10-inch thick — could be a substitute for bulky, specialized ultrasound equipment available only in hospitals and doctors’ offices, where technicians apply a gel to the skin and then use a wand or probe to direct sound waves into the body.

The waves reflect back high-resolution images of major blood vessels and deeper organs such as the heart, lungs and stomach. While some hospitals already have probes affixed to robotic arms that can provide imaging for extended periods, the ultrasound gel dries over time.

For now, the stickers would still have to be connected to instruments, but Zhao and other researchers are working on a way to operate them wirelessly.

That opens up the possibility of patients wearing them at home or buying them at a drug store. Even in their current design, they could eliminate the need for a technician to hold a probe in place for a long time.

In the study, the patches adhered well to the skin, enabling researchers to capture images even if volunteers moved from sitting to standing, jogging and biking.

“We envision a few patches adhered to different locations on the body, and the patches would communicate with your cellphone, where AI algorithms would analyze the images on demand,” Zhao explained in an MIT news release.

A different approach tested — stretchable ultrasound probes — yielded images with poor resolution.

“[A] wearable ultrasound imaging tool would have huge potential in the future of clinical diagnosis. However, the resolution and imaging duration of existing ultrasound patches are relatively low, and they cannot image deep organs,” said co-lead author Chonghe Wang, a graduate student who works in Zhao’s lab.

The MIT team’s new ultrasound sticker produces higher-resolution images by pairing a stretchy adhesive layer with a rigid array of transducers, which convert electrical energy into sound waves and back. In the middle is a solid hydrogel that transmits sound waves. The adhesive layer is made from two thin layers of elastomer.

“The elastomer prevents dehydration of hydrogel,” co-lead author Xiaoyu Chen explained. “Only when hydrogel is highly hydrated can acoustic waves penetrate effectively and give high-resolution imaging of internal organs.”

Healthy volunteers wore the stickers on various areas, including the neck, chest, abdomen and arms. The stickers produced clear images of underlying structures, including the changing diameter of major blood vessels, for up to 48 hours. They stayed attached while volunteers sat, stood, jogged, biked and lifted weights.

They showed how the heart changes shape as it exerts itself during exercise and how the stomach swells, then shrinks, as volunteers drank juice and later passed it out of their systems. Researchers also could detect signs of temporary micro-damage in muscles as volunteers lifted weights.

“With imaging, we might be able to capture the moment in a workout before overuse, and stop before muscles become sore,” Chen said. “We do not know when that moment might be yet, but now we can provide imaging data that experts can interpret.”

In addition to working on wireless technology for the stickers, the team is developing software algorithms based on artificial intelligence that can better interpret the ultrasound images.

Zhao thinks patients may one day be able to buy stickers that could be used to monitor internal organs, the progression of tumors and development of fetuses in the womb.

“We imagine we could have a box of stickers, each designed to image a different location of the body,” Zhao said. “We believe this represents a breakthrough in wearable devices and medical imaging.”

The findings were published in Science.

Source: HealthDay

A Rendezvous at Tiangong: “Wentian” Sets Out to Question the Stars

Reporter: Yu Jianbin . . . . . . . . .

Source : 新华网

Infographic: Earth’s Tectonic Plates

See large image . . . . . .

Source : Visual Capitalist

Infographic: Elements Making Up the Human Body

See large image . . . . . .

Source : Visual Capitalist

Soy Sauce’s Salt-enhancing Peptides

Soy sauce deepens the flavor of soup stocks, gives stir-fried rice its sweet-salty glaze and makes a plate of dumplings absolutely enjoyable. But what exactly makes this complex, salty, umami sauce so tasty? Now, researchers reporting in ACS’ Journal of Agricultural and Food Chemistry have identified the proteins and other compounds that give soy sauce its distinctive flavors, and they report that proteins and peptides help make it salty.

Understanding how foods taste the way they do can help producers tailor their growing or manufacturing methods or modify the final product to boost certain flavors. Decoding the flavors of fermented foods like soy sauce is particularly challenging because they arise from complex processes, including the microbial breakdown of proteins and other compounds, that happen over a long period of time.

Though several compounds in soy sauce are known, no complete profile of its flavor agents has been developed. So, Thomas Hofmann and colleagues wanted to carry out a full assessment of the chemicals behind soy sauce’s flavor profile and test the completeness of this profile by using the compounds to recreate the seasoning’s distinctive taste.

The team started by trying to recreate soy sauce’s taste with a mixture of compounds known to be involved in its flavor. A panel of taste experts found that this recreated soy sauce wasn’t quite right — it wasn’t as salty or as bitter as the authentic product.

The team then searched for other, unknown flavor compounds, hypothesizing that small proteins could potentially be the missing ingredient. Using various chemical and taste analysis methods, they identified a collection of proline-modified dipeptides and other larger, newly identified proteins that enhanced umami and other flavors.

Several of the proteins were discovered to contribute to a salty sensation, which, in soy sauce, had only previously been attributed to table salt and other minerals. After mixing a sample containing over 50 individual compounds, the team was finally able to recreate the complex taste of soy sauce.

This profile could help producers optimize fermentation conditions to boost desirable compounds and tailor the taste of the final product, the researchers say.

Source: American Chemical Society

Cambodian Catches World’s Largest Recorded Freshwater Fish

Jerry Harmer wrote . . . . . . . . .

The world’s largest recorded freshwater fish, a giant stingray, has been caught in the Mekong River in Cambodia, according to scientists from the Southeast Asian nation and the United States.

The stingray, captured on June 13, measured almost four meters (13 feet) from snout to tail and weighed slightly under 300 kilograms (660 pounds), according to a statement Monday by Wonders of the Mekong, a joint Cambodian-U.S. research project.

The previous record for a freshwater fish was a 293-kilogram (646-pound) Mekong giant catfish, discovered in Thailand in 2005, the group said.

The stingray was snagged by a local fisherman south of Stung Treng in northeastern Cambodia. The fisherman alerted a nearby team of scientists from the Wonders of the Mekong project, which has publicized its conservation work in communities along the river.

The scientists arrived within hours of getting a post-midnight call with the news, and were amazed at what they saw.

“Yeah, when you see a fish this size, especially in freshwater, it is hard to comprehend, so I think all of our team was stunned,” Wonders of the Mekong leader Zeb Hogan said in an online interview from the University of Nevada in Reno. The university is partnering with the Cambodian Fisheries Administration and USAID, the U.S. government’s international development agency.

Freshwater fish are defined as those that spend their entire lives in freshwater, as opposed to giant marine species such as bluefin tuna and marlin, or fish that migrate between fresh and saltwater like the huge beluga sturgeon.

The stingray’s catch was not just about setting a new record, he said.

“The fact that the fish can still get this big is a hopeful sign for the Mekong River,” Hogan said, noting that the waterway faces many environmental challenges.

The Mekong River runs through China, Myanmar, Laos, Thailand, Cambodia and Vietnam. It is home to several species of giant freshwater fish but environmental pressures are rising. In particular, scientists fear a major program of dam building in recent years may be seriously disrupting spawning grounds.

“Big fish globally are endangered. They’re high-value species. They take a long time to mature. So if they’re fished before they mature, they don’t have a chance to reproduce,” Hogan said. “A lot of these big fish are migratory, so they need large areas to survive. They’re impacted by things like habitat fragmentation from dams, obviously impacted by overfishing. So about 70% of giant freshwater fish globally are threatened with extinction, and all of the Mekong species.”

The team that rushed to the site inserted a tagging device near the tail of the mighty fish before releasing it. The device will send tracking information for the next year, providing unprecedented data on giant stingray behavior in Cambodia.

“The giant stingray is a very poorly understood fish. Its name, even its scientific name, has changed several times in the last 20 years,” Hogan said. “It’s found throughout Southeast Asia, but we have almost no information about it. We don’t know about its life history. We don’t know about its ecology, about its migration patterns.”

Researchers say it’s the fourth giant stingray reported in the same area in the past two months, all of them females. They think this may be a spawning hotspot for the species.

Local residents nicknamed the stingray “Boramy,” or “full moon,” because of its round shape and because the moon was on the horizon when it was freed on June 14. In addition to the honor of having caught the record-breaker, the lucky fisherman was compensated at market rate, meaning he received a payment of around $600.

Source : AP

A Glucose Meter Could Soon Say Whether You Have SARS-CoV-2 Antibodies

Over-the-counter COVID tests can quickly show whether you are infected with SARS-CoV-2. But if you have a positive result, there’s no equivalent at-home test to assess how long you’re protected against reinfection. In the Journal of the American Chemical Society, researchers now report a simple, accurate glucose-meter-based test incorporating a novel fusion protein. The researchers say that consumers could someday use this assay to monitor their own SARS-CoV-2 antibody levels.

Vaccines against SARS-CoV-2 and infection with the virus itself can guard against future infections for a while, but it’s unclear exactly how long that protection lasts. A good indication of immune protection is a person’s level of SARS-CoV-2 antibodies, but the gold standard measurement – the enzyme-linked immunosorbent assay (ELISA) – requires expensive equipment and specialized technicians.

Enter glucose meters, which are readily available, easy to use and can be integrated with remote clinical services. Researchers have been adapting these devices to sense other target molecules, coupling detection with glucose production. For example, if a detection antibody in the test binds to an antibody in a patient’s blood, then a reaction occurs that produces glucose — something the device detects very well. Invertase is an attractive enzyme for this type of analysis because it converts sucrose into glucose, but it’s difficult to attach the enzyme to detection antibodies with chemical approaches. So, Netzahualcóyotl Arroyo-Currás, Jamie B. Spangler and colleagues wanted to see whether producing a fusion protein consisting of both invertase and a detection antibody would work in an assay that would allow SARS-CoV-2 antibody levels to be read with a glucose meter.

The researchers designed and produced a novel fusion protein containing both invertase and a mouse antibody that binds to human immunoglobulin (IgG) antibodies. They showed that the fusion protein bound to human IgGs and successfully produced glucose from sucrose. Next, the team made test strips with the SARS-CoV-2 spike protein on them. When dipped in COVID-19 patient samples, the patients’ SARS-CoV-2 antibodies bound to the spike protein. Adding the invertase/IgG fusion protein, then sucrose, led to the production of glucose, which could be detected by a glucose meter. They validated the test by performing the analysis with glucose meters on a variety of patient samples, and found that the new assay worked as well as four different ELISAs. The researchers say that the method can also be adapted to test for SARS-CoV-2 variants and other infectious diseases.

Source: American Chemical Society


Reporters: Hu Zhe, Zhang Quan, Wen Jinghua, Wang Linlin, Xu Penghang . . . . . . . . .

On the 6th, the Publicity Department of the CPC Central Committee held the sixth press conference in its “China in the Past Decade” themed series, focusing on “implementing the innovation-driven development strategy and building a powerhouse in science and technology.”

The Blueprint for the Science and Technology Endeavor Has Been Drawn and Continues to Advance

Source : 新华网

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

Erik Brynjolfsson wrote . . . . . . . . .


In 1950, Alan Turing proposed a test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from a human’s? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly a better understanding of our own minds. But not all types of AI are human-like–in fact, many of the most powerful systems are very different from humans–and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What is more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policy-makers.

Alan Turing was far from the first to imagine human-like machines. According to legend, 3,500 years ago, Dædalus constructed humanoid statues that were so lifelike that they moved and spoke by themselves. Nearly every culture has its own stories of human-like machines, from Yanshi’s leather man described in the ancient Chinese Liezi text to the bronze Talus of the Argonautica and the towering clay Mokkerkalfe of Norse mythology. The word robot first appeared in Karel Čapek’s influential play Rossum’s Universal Robots and derives from the Czech word robota, meaning servitude or work. In fact, in the first drafts of his play, Čapek named them labori until his brother Josef suggested substituting the word robot.

Of course, it is one thing to tell tales about humanoid machines. It is something else to create robots that do real work. For all our ancestors’ inspiring stories, we are the first generation to build and deploy real robots in large numbers. Dozens of companies are working on robots as human-like as, if not more so than, those described in the ancient texts. One might say that technology has advanced sufficiently to become indistinguishable from mythology.

The breakthroughs in robotics depend not merely on more dexterous mechanical hands and legs, and more perceptive synthetic eyes and ears, but also on increasingly human-like artificial intelligence (HLAI). Powerful AI systems are crossing key thresholds: matching humans in a growing number of fundamental tasks such as image recognition and speech recognition, with applications from autonomous vehicles and medical diagnosis to inventory management and product recommendations.

These breakthroughs are both fascinating and exhilarating. They also have profound economic implications. Just as earlier general-purpose technologies like the steam engine and electricity catalyzed a restructuring of the economy, our own economy is increasingly transformed by AI. A good case can be made that AI is the most general of all general-purpose technologies: after all, if we can solve the puzzle of intelligence, it would help solve many of the other problems in the world. And we are making remarkable progress. In the coming decade, machine intelligence will become increasingly powerful and pervasive. We can expect record wealth creation as a result.

Replicating human capabilities is valuable not only because of its practical potential for reducing the need for human labor, but also because it can help us build more robust and flexible forms of intelligence. Whereas domain-specific technologies can often make rapid progress on narrow tasks, they founder when unexpected problems or unusual circumstances arise. That is where human-like intelligence excels. In addition, HLAI could help us understand more about ourselves. We appreciate and comprehend the human mind better when we work to create an artificial one.

These are all important opportunities, but in this essay, I will focus on the ways that HLAI could lead to a realignment of economic and political power.

The distributive effects of AI depend on whether it is primarily used to augment human labor or automate it. When AI augments human capabilities, enabling people to do things they never could before, then humans and machines are complements. Complementarity implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making. In contrast, when AI replicates and automates existing human capabilities, machines become better substitutes for human labor and workers lose economic and political bargaining power. Entrepreneurs and executives who have access to machines with capabilities that replicate those of humans for a given task can and often will replace humans in those tasks.

Automation increases productivity. Moreover, there are many tasks that are dangerous, dull, or dirty, and those are often the first to be automated. As more tasks are automated, a fully automated economy could, in principle, be structured to redistribute the benefits from production widely, even to those people who are no longer strictly necessary for value creation. However, the beneficiaries would be in a weak bargaining position to prevent a change in the distribution that left them with little or nothing. Their incomes would depend on the decisions of those in control of the technology. This opens the door to increased concentration of wealth and power.

This highlights the promise and the peril of achieving HLAI: building machines designed to pass the Turing Test and other, more sophisticated metrics of human-like intelligence. On the one hand, it is a path to unprecedented wealth, increased leisure, robust intelligence, and even a better understanding of ourselves. On the other hand, if HLAI leads machines to automate rather than augment human labor, it creates the risk of concentrating wealth and power. And with that concentration comes the peril of being trapped in an equilibrium in which those without power have no way to improve their outcomes, a situation I call the Turing Trap.

The grand challenge of the coming era will be to reap the unprecedented benefits of AI, including its human-like manifestations, while avoiding the Turing Trap. Succeeding in this task requires an understanding of how technological progress affects productivity and inequality, why the Turing Trap is so tempting to different groups, and a vision of how we can do better.

Artificial intelligence pioneer Nils Nilsson noted that “achieving real human-level AI would necessarily imply that most of the tasks that humans perform for pay could be automated.” In the same article, he called for a focused effort to create such machines, writing that “achieving human-level AI or ‘strong AI’ remains the ultimate goal for some researchers” and he contrasted this with “weak AI,” which seeks to “build machines that help humans.” Not surprisingly, given these monikers, work toward “strong AI” attracted many of the best and brightest minds to the quest of–implicitly or explicitly–fully automating human labor, rather than assisting or augmenting it.

For the purposes of this essay, rather than strong versus weak AI, let us use the terms automation versus augmentation. In addition, I will use HLAI to mean human-like artificial intelligence, not human-level AI, because the latter mistakenly implies that intelligence falls on a single dimension, and perhaps even that humans are at the apex of that metric. In reality, intelligence is multidimensional: a 1970s pocket calculator surpasses the most intelligent human in some ways (such as for multiplication), as does a chimpanzee (short-term memory). At the same time, machines and animals are inferior to human intelligence on myriad other dimensions. The term “artificial general intelligence” (AGI) is often used as a synonym for HLAI. However, taken literally, it is the union of all types of intelligences, able to solve types of problems that are solvable by any existing human, animal, or machine. That suggests that AGI is not human-like.

The good news is that both automation and augmentation can boost labor productivity: that is, the ratio of value-added output to labor-hours worked. As productivity increases, so do average incomes and living standards, as do our capabilities for addressing challenges from climate change and poverty to health care and longevity. Mathematically, if the human labor used for a given output declines toward zero, then labor productivity would grow to infinity.
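The limiting claim in the last sentence can be written out explicitly; here $Y$ stands for value-added output and $L$ for labor-hours worked (symbols introduced for illustration, not in the original essay):

```latex
\text{labor productivity} \;=\; \frac{Y}{L},
\qquad
\lim_{L \to 0^{+}} \frac{Y}{L} \;=\; \infty
\quad \text{for any fixed output } Y > 0.
```

That is, as automation drives the labor input for a given output toward zero, the productivity ratio grows without bound — which is why productivity alone says nothing about how the resulting value is distributed.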

The bad news is that no economic law ensures everyone will share this growing pie. Although pioneering models of economic growth assumed that technological change was neutral, in practice, technological change can disproportionately help or hurt some groups, even if it is beneficial on average.

In particular, the way the benefits of technology are distributed depends to a great extent on how the technology is deployed and the economic rules and norms that govern the equilibrium allocation of goods, services, and incomes. When technologies automate human labor, they tend to reduce the marginal value of workers’ contributions, and more of the gains go to the owners, entrepreneurs, inventors, and architects of the new systems. In contrast, when technologies augment human capabilities, more of the gains go to human workers.

A common fallacy is to assume that all or most productivity-enhancing innovations belong in the first category: automation. However, the second category, augmentation, has been far more important throughout most of the past two centuries. One metric of this is the economic value of an hour of human labor. Its market price as measured by median wages has grown more than tenfold since 1820. An entrepreneur is willing to pay much more for a worker whose capabilities are amplified by a bulldozer than one who can only work with a shovel, let alone with bare hands.

In many cases, not only wages but also employment grow with the introduction of new technologies. With the invention of the airplane, a new job category was born: pilots. With the invention of jet engines, pilot productivity (in passenger-miles per pilot-hour) grew immensely. Rather than reducing the number of employed pilots, the technology spurred demand for air travel so much that the number of pilots grew. Although this pattern is comforting, past performance does not guarantee future results. Modern technologies–and, more important, the ones under development–are different from those that were important in the past.

In recent years, we have seen growing evidence that not only is the labor share of the economy declining, but even among workers, some groups are beginning to fall even further behind. Over the past forty years, the numbers of millionaires and billionaires grew while the average real wages for Americans with only a high school education fell. Though many phenomena contributed to this, including new patterns of global trade, changes in technology deployment are the single biggest explanation.

If capital in the form of AI can perform more tasks, those with unique assets, talents, or skills that are not easily replaced with technology stand to benefit disproportionately. The result has been greater wealth concentration.

Ultimately, a focus on more human-like AI can make technology a better substitute for the many nonsuperstar workers, driving down their market wages, even as it amplifies the market power of a few. This has created a growing fear that AI and related advances will lead to a burgeoning class of unemployable or “zero marginal product” people.

As noted above, both automation and augmentation can increase productivity and wealth. However, an unfettered market is likely to create socially excessive incentives for innovations that automate human labor and provide too weak incentives for technology that augments humans. The first fundamental welfare theorem of economics states that under a particular set of conditions, market prices lead to a Pareto optimal outcome: that is, one where no one can be made better off without making someone else worse off. But we should not take too much comfort in that. The theorem does not hold when there are innovations that change the production possibilities set or externalities that affect people who are not part of the market.

[ . . . . . . . . ]

In sum, the risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policy-makers.

The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation. We can work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans. The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.

While both approaches can and do contribute to productivity and progress, technologists, businesspeople, and policy-makers have each been putting a finger on the scales in favor of replacement. Moreover, the tendency of a greater concentration of technological and economic power to beget a greater concentration of political power risks trapping a powerless majority into an unhappy equilibrium: the Turing Trap.

The backlash against free trade offers a cautionary tale. Economists have long argued that free trade and globalization tend to grow the economic pie through the power of comparative advantage and specialization. They have also acknowledged that market forces alone do not ensure that every person in every country will come out ahead. So they proposed a grand bargain: maximize free trade to maximize wealth creation and then distribute the benefits broadly to compensate any injured occupations, industries, and regions. It has not worked as they had hoped. As the economic winners gained power, they reneged on the second part of the bargain, leaving many workers worse off than before. The result helped fuel a populist backlash that led to import tariffs and other barriers to free trade. Economists wept.

Some of the same dynamics are already underway with AI. More and more Americans, and indeed workers around the world, believe that while the technology may be creating a new billionaire class, it is not working for them. The more technology is used to replace rather than augment labor, the worse the disparity may become, and the greater the resentments that feed destructive political instincts and actions. More fundamentally, the moral imperative of treating people as ends, and not merely as means, calls for everyone to share in the gains of automation.

The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. A good start would be to replace the Turing Test, and the mindset it embodies, with a new set of practical benchmarks that steer progress toward AI-powered systems that exceed anything that could be done by humans alone. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.

Source : Dædalus

Infographic: Companies with the Most Patents Granted in 2021

See large image . . . . . .

Source : Visual Capitalist