Data, Info and News of Life and Economy


World AI Conference Technological Innovation Highlights

Artificial intelligence is transitioning into a new era of technological developments as both industries and consumers embrace new innovations, industry leaders said during the World Artificial Intelligence Conference 2022 in Shanghai.

The three-day conference that concluded Saturday brought together key industrial experts and academics, while showcasing the latest products from more than 200 exhibitors across sectors including finance, health, transportation, and culture.

Sixth Tone lists three technological innovations that are reinventing people’s lives and the future.

Fall detection camera

Chinese tech giant Tencent has developed a smart camera that can spot a fall by detecting abrupt changes in a person’s body movements. Dubbed the “Invisible Guardian,” the system will immediately alert a designated contact via a text message and phone call once a fall is confirmed.

Zhang Jiapei, a representative of the company, told Sixth Tone that the product aims to protect seniors, especially those living alone. In addition to round-the-clock visual detection, Zhang said an updated version of the product would also trigger rescue alerts when users call for help using certain keywords.
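The core idea, spotting an abrupt change in body position and confirming the person stays down, can be sketched in a few lines of Python. This is a hypothetical illustration only: the thresholds, the single hip-height signal, and the function names are assumptions, not details of Tencent’s system.

```python
# Hypothetical fall-detection sketch: flag a fall when a tracked body
# point drops abruptly between frames and then stays low. Thresholds
# are illustrative assumptions, not Tencent's actual values.

FALL_DROP = 0.4      # fraction of frame height dropped in one step
STILL_FRAMES = 3     # frames the person must remain low afterwards

def detect_fall(hip_heights):
    """hip_heights: per-frame hip height, normalized to [0, 1]
    (1.0 = top of frame). Returns index of detected fall, or None."""
    for i in range(1, len(hip_heights) - STILL_FRAMES):
        drop = hip_heights[i - 1] - hip_heights[i]
        if drop >= FALL_DROP:
            after = hip_heights[i : i + STILL_FRAMES]
            # Confirm the person stays down before alerting.
            if all(h <= hip_heights[i - 1] - FALL_DROP for h in after):
                return i
    return None

walking = [0.60, 0.59, 0.61, 0.60, 0.58, 0.59]  # gradual change
falling = [0.60, 0.61, 0.15, 0.14, 0.15, 0.14]  # abrupt drop, then still
print(detect_fall(walking))  # None
print(detect_fall(falling))  # 2
```

A production system would of course work from full pose-estimation output rather than one scalar, and would confirm the event before texting and calling the designated contact.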

Falls are the second leading cause of unintentional injury deaths globally, with people over the age of 60 accounting for half of such fatalities, according to the World Health Organization. The camera’s fall detection accuracy has reached over 90%, the company said, adding that it would first promote the device in public elderly care homes in Shenzhen before expanding to other cities.

“We want seniors, especially those living alone, to be noticed immediately and sent for treatment when they fall,” Zhang said. “As the country has an increasingly aging population, we hope the device can help safeguard their lives.”

Anti-fraud system for digital transactions

Alibaba’s fintech spin-off Ant Group revealed its intelligent risk control system for quickly detecting and intervening in fraudulent digital transactions. One of the key features in the multi-layered mechanism is a 90-second alert call by the platform’s AI assistant.

Zhao Ke, a representative of Ant Group, told Sixth Tone that the system could provide tailored evaluations by analyzing real-time feedback from users during the call, and terminate the process when necessary.

“With this 90-second pause, we want to make our users understand what’s going on by getting them away from the fraudulent transaction where they’re being urged to quickly complete the transaction,” Zhao said, adding that communication could provide more information and improve the accuracy of the system.

Other anti-fraud innovations include a 15-minute “cooldown period” and a 24-hour transfer delay. The company said the AI-powered system has incorporated into its database all the fraud typologies released by police authorities, and sends more than 500,000 alerts every day across all its applicable platforms, including the popular digital payment app Alipay.
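The layered interventions described above (alert call, cooldown period, transfer delay) suggest a simple escalation policy keyed to a risk score. The sketch below is purely illustrative: the score ranges, tier boundaries, and function name are invented for illustration, since Ant Group’s actual model and thresholds are not public.

```python
# Illustrative tiered-intervention policy for flagged transactions.
# The thresholds are invented; only the three named interventions
# come from the reporting above.

def choose_intervention(risk_score):
    """Map a fraud-risk score in [0, 1] to an escalating response."""
    if risk_score >= 0.9:
        return "24-hour transfer delay"
    if risk_score >= 0.7:
        return "15-minute cooldown period"
    if risk_score >= 0.4:
        return "90-second AI alert call"
    return "allow transaction"

print(choose_intervention(0.2))   # allow transaction
print(choose_intervention(0.5))   # 90-second AI alert call
print(choose_intervention(0.95))  # 24-hour transfer delay
```

In the real system, feedback gathered during the 90-second call would presumably update the score, which is how the call can terminate a transaction "when necessary."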

Robot COVID-19 tester

Flexiv, a Shanghai-based robotics enterprise, has developed a robot that can complete an automated COVID-19 test, from picking up a cotton swab to placing the sample in a tube.

Equipped with visual recognition technologies and highly flexible force sensors, the mechanical arm can identify the right point to swab people of different heights, while ensuring it is done in a safe and comfortable way. The swab takes around 28 seconds to finish and the robot can test 240 people in two hours, a member of staff from the company told Sixth Tone.
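The reported figures are internally consistent, as a quick check shows: 240 people in two hours works out to 30 seconds per person, leaving roughly two seconds of changeover around each 28-second swab.

```python
# Throughput check on the figures reported for the swabbing robot.
people = 240
window_seconds = 2 * 60 * 60           # two hours
seconds_per_person = window_seconds / people
swab_seconds = 28                      # reported time per swab
changeover = seconds_per_person - swab_seconds

print(seconds_per_person)  # 30.0
print(changeover)          # 2.0
```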

As frequent COVID-19 testing has become a pivotal tool in containing outbreaks, China is seeking to free medical workers from mass testing and reduce their exposure to the virus. In August, the country published its first industrial standard on sampling machines, which helped review the existing products and regulate the sector.

Source : Sixth Tone

China Achieves ‘Brain-scale’ AI with Latest Supercomputer

Anthony Cuthbertson wrote . . . . . . . . .

Computer scientists in China claim to have run an artificial intelligence program using architecture that is as complex as the human brain.

The AI model, named ‘bagualu’ or ‘alchemist’s pot’, was run on the latest generation of the Sunway supercomputer based at the National Supercomputing Center in the eastern province of Jiangsu.

Researchers described it as a “brain-scale” AI model, which could have applications in fields ranging from self-driving vehicles and computer vision, to chemistry and other scientific discoveries.

The Sunway TaihuLight is officially ranked as the fourth most powerful supercomputer in the world; however, the researchers claim the latest demonstration puts it on a par with the US Frontier, which currently tops the list.

The Sunway was ranked as the most powerful computer in the world between 2016 and 2018, according to the Top500 list of leading supercomputers; however, Chinese institutions no longer submit performance data to the list. China still has 173 supercomputers in the Top500 rankings, more than any other country.

The South China Morning Post reported that the bagualu programme ran with 174 trillion parameters, rivalling the number of synapses in the human brain by some estimates.

The publication reported that the Sunway supercomputer has more than 37 million CPU cores, four times as many as the Frontier supercomputer in the US.

It also has nine petabytes of memory, which is the equivalent of more than 2 million HD movies.

One researcher said that its power gave the latest Sunway the ability to perform parallel computing in a way that mimicked human thinking, claiming it was “like eating while watching television”.

The results were presented at the Principles and Practice of Parallel Programming 2022 conference hosted by the US Association for Computing Machinery in April, but were not reported on at the time.

Source : Yahoo!

China Launches Drone Ship That Acts As A Mothership For More Drones

Joseph Trevithick and Oliver Parken wrote . . . . . . . . .

China has launched a huge ‘drone ship,’ ostensibly designed for marine research purposes. The vessel, which is claimed to feature an advanced artificial intelligence operating system that allows for at least semi-autonomous operation, could also be employed in military contexts as a hub for various unmanned weapons and surveillance systems.

News of the launch, which took place on May 18, came via the South China Morning Post, which reported the vessel, named Zhu Hai Yun, as the world’s first unmanned drone ship. Although other examples of unmanned surface vessels, or USVs, have become fairly common in recent years, Zhu Hai Yun is said to boast a custom artificial intelligence (AI) operating system to support its function as a mothership for various unmanned platforms, including aerial drones and submersibles. These combined capabilities will make it a powerful ocean research tool, according to a report from Science and Technology Daily – the official journal of China’s Ministry of Science and Technology – cited by the South China Morning Post. Clearly, there could be major military applications for it as well.

The ship constitutes a new “marine species,” according to Chen Dake, director of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) at Sun Yat-sen University, which is responsible for developing the ship’s AI system, branded the Intelligent Mobile Ocean Stereo Observing System (IMOSOS).

Quoted in Science and Technology Daily back in 2021, Chen stressed the revolutionary potential of the ship as the nerve center of an interconnected range of observational capabilities.

“The intelligent, unmanned ship…will bring revolutionary changes for ocean observation.”

Huangpu Wenchong Shipping Company, which built the ship, claims that when deployed, the vessel can undertake “three-dimensional dynamic observations” of specific target areas using unmanned aircraft, boats, and submersibles. Huangpu Wenchong is a subsidiary of China State Shipbuilding Corporation (CSSC), the country’s largest ship manufacturer.

Construction of the Zhu Hai Yun began in July of 2021 in Guangzhou, northwest of Hong Kong. The ship measures 290 feet long, 46 feet wide, and 20 feet deep. It features a wide deck, aiding its ability to carry various platforms. The ship can sail at a top speed of 18 knots, with a designed displacement of 2,000 tons.

It’s likely, as wider coverage of Zhu Hai Yun also indicates, that the ship will operate semi-autonomously. To navigate busy port areas, a crew would take control of the ship’s navigation system remotely, or be physically present on board to monitor it during these complex navigational phases. In this sense, control of the vessel would be split between human operators and the autonomous AI system.

Following the completion of testing and sea trials, Zhu Hai Yun is expected to be delivered by the end of 2022.

The combination of the ship’s new AI system, plus its ability to carry various unmanned capabilities, renders it an important tool for marine observation according to Science and Technology Daily, with important capabilities for China’s marine conservation and disaster prevention. Yet Zhu Hai Yun’s AI and drone-carrying capabilities have the potential to perform secondary military functions, particularly in terms of searching for targets of interest and coordinating persistent observation of those targets.

Other Chinese firms have already begun developing unmanned surface vessels for security-specific missions. Yunzhou Tech, a leading developer of unmanned surface vehicles, revealed six high-speed unmanned vessels in late 2021, designed to “quickly intercept, besiege and expel” unspecified maritime intruders. Its “dynamic cooperative confrontation technology,” or “swarming” technology, allows drone ships to engage hostile targets in a coordinated manner without the need for manual control. Back in 2018, Yunzhou Tech demonstrated a huge 56-boat swarm of unmanned vessels in various conflict control and resolution tests. Zhu Hai Yun, or other vessels like it, could in theory coordinate these sorts of drone ship swarms.

Being able to gather “three-dimensional dynamic observations” would prove particularly significant for China’s Navy should it become involved in a conflict in the Pacific. As the report on the USS Connecticut accident underscores, highly accurate underwater navigation data is especially important for safe submarine operations under the waves. With accurate charts on the topography of the seabed readily available, thanks to a vessel such as Zhu Hai Yun, Chinese submarines would be able to improve mission planning and navigational flexibility.

In addition, the utility of a vessel such as Zhu Hai Yun would extend above the waves, too. Wide-area surveillance, with the possibility of geo-location sharing, would allow the Chinese Navy to seek out, as well as directly target, adversary vessels or other objects of interest within the vast expanse of the Pacific Ocean via the employment of drone swarms or other weapons. These are capabilities that are likely to be critical in any future conflicts China wages, including over the island of Taiwan. U.S. military wargaming around scenarios involving the defense of Taiwan in recent years has highlighted the immense value that drone swarms would offer the other side when used as distributed sensing networks.

The Chinese government has recently been investing significant resources into research and development of unmanned technology (including swarming capabilities) and AI/machine learning. It has simultaneously worked on small drone swarming capabilities as well as platforms able to field aerial drones at sea. Just last year, it was revealed that China had launched a catamaran mothership intended to field and recover fleets of small aerial drones, as well as conduct electronic attacks on vessels’ communications for training and potential wartime use.

While the launch of Zhu Hai Yun has been reported as a triumph for Chinese marine research, particularly by state-run publications, this would not be the first time China has presented new maritime technology with secondary military functions as ‘ocean research.’ In 2017, for example, news emerged that Chinese plans to establish a network of underwater sensors, ostensibly for ‘environmental research,’ may have had anti-submarine warfare applications. Moreover, in 2020, The War Zone reported that the state-run Chinese Academy of Sciences’ (CAS) ‘Sea Wing’ UUVs may have been used for something more than environmental research.

At the same time, it is important to note that Zhu Hai Yun is only one ship. However, the experience gained in its development, construction, and eventual employment, regardless of how it is utilized, is all but certain to feed into other commercial and military work on unmanned surface vessels, autonomy, and other related technologies. This function as an experimental technology demonstrator, akin in some ways to the vessels tested as part of the U.S. military’s Ghost Fleet Overlord program, is likely to be just as important as whatever it might end up doing operationally.

Zhu Hai Yun clearly has huge potential for maritime defense – both in terms of fielding weapons, as well as obtaining critical surveillance. Whatever purpose the ship ends up serving in a military context, its launch underscores China’s recent efforts to dominate AI, particularly in terms of using it to address defense and national security concerns, as well as unmanned technologies.

Source : The Drive

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

Erik Brynjolfsson wrote . . . . . . . . .


In 1950, Alan Turing proposed a test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from a human’s? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly a better understanding of our own minds. But not all types of AI are human-like–in fact, many of the most powerful systems are very different from humans–and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What is more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policy-makers.

Alan Turing was far from the first to imagine human-like machines. According to legend, 3,500 years ago, Dædalus constructed humanoid statues that were so lifelike that they moved and spoke by themselves. Nearly every culture has its own stories of human-like machines, from Yanshi’s leather man described in the ancient Chinese Liezi text to the bronze Talus of the Argonautica and the towering clay Mokkerkalfe of Norse mythology. The word robot first appeared in Karel Čapek’s influential play Rossum’s Universal Robots and derives from the Czech word robota, meaning servitude or work. In fact, in the first drafts of his play, Čapek named them labori until his brother Josef suggested substituting the word robot.

Of course, it is one thing to tell tales about humanoid machines. It is something else to create robots that do real work. For all our ancestors’ inspiring stories, we are the first generation to build and deploy real robots in large numbers. Dozens of companies are working on robots as human-like as, if not more so than, those described in the ancient texts. One might say that technology has advanced sufficiently to become indistinguishable from mythology.

The breakthroughs in robotics depend not merely on more dexterous mechanical hands and legs, and more perceptive synthetic eyes and ears, but also on increasingly human-like artificial intelligence (HLAI). Powerful AI systems are crossing key thresholds: matching humans in a growing number of fundamental tasks such as image recognition and speech recognition, with applications from autonomous vehicles and medical diagnosis to inventory management and product recommendations.

These breakthroughs are both fascinating and exhilarating. They also have profound economic implications. Just as earlier general-purpose technologies like the steam engine and electricity catalyzed a restructuring of the economy, our own economy is increasingly transformed by AI. A good case can be made that AI is the most general of all general-purpose technologies: after all, if we can solve the puzzle of intelligence, it would help solve many of the other problems in the world. And we are making remarkable progress. In the coming decade, machine intelligence will become increasingly powerful and pervasive. We can expect record wealth creation as a result.

Replicating human capabilities is valuable not only because of its practical potential for reducing the need for human labor, but also because it can help us build more robust and flexible forms of intelligence. Whereas domain-specific technologies can often make rapid progress on narrow tasks, they founder when unexpected problems or unusual circumstances arise. That is where human-like intelligence excels. In addition, HLAI could help us understand more about ourselves. We appreciate and comprehend the human mind better when we work to create an artificial one.

These are all important opportunities, but in this essay, I will focus on the ways that HLAI could lead to a realignment of economic and political power.

The distributive effects of AI depend on whether it is primarily used to augment human labor or automate it. When AI augments human capabilities, enabling people to do things they never could before, then humans and machines are complements. Complementarity implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making. In contrast, when AI replicates and automates existing human capabilities, machines become better substitutes for human labor and workers lose economic and political bargaining power. Entrepreneurs and executives who have access to machines with capabilities that replicate those of humans for a given task can and often will replace humans in those tasks.

Automation increases productivity. Moreover, there are many tasks that are dangerous, dull, or dirty, and those are often the first to be automated. As more tasks are automated, a fully automated economy could, in principle, be structured to redistribute the benefits from production widely, even to those people who are no longer strictly necessary for value creation. However, the beneficiaries would be in a weak bargaining position to prevent a change in the distribution that left them with little or nothing. Their incomes would depend on the decisions of those in control of the technology. This opens the door to increased concentration of wealth and power.

This highlights the promise and the peril of achieving HLAI: building machines designed to pass the Turing Test and other, more sophisticated metrics of human-like intelligence. On the one hand, it is a path to unprecedented wealth, increased leisure, robust intelligence, and even a better understanding of ourselves. On the other hand, if HLAI leads machines to automate rather than augment human labor, it creates the risk of concentrating wealth and power. And with that concentration comes the peril of being trapped in an equilibrium in which those without power have no way to improve their outcomes, a situation I call the ­Turing Trap.

The grand challenge of the coming era will be to reap the unprecedented benefits of AI, including its human-like manifestations, while avoiding the Turing Trap. Succeeding in this task requires an understanding of how technological progress affects productivity and inequality, why the Turing Trap is so tempting to different groups, and a vision of how we can do better.

Artificial intelligence pioneer Nils Nilsson noted that “achieving real human-level AI would necessarily imply that most of the tasks that humans perform for pay could be automated.” In the same article, he called for a focused effort to create such machines, writing that “achieving human-level AI or ‘strong AI’ remains the ultimate goal for some researchers” and he contrasted this with “weak AI,” which seeks to “build machines that help humans.” Not surprisingly, given these monikers, work toward “strong AI” attracted many of the best and brightest minds to the quest of–implicitly or explicitly–fully automating human labor, rather than assisting or augmenting it.

For the purposes of this essay, rather than strong versus weak AI, let us use the terms automation versus augmentation. In addition, I will use HLAI to mean human-like artificial intelligence, not human-level AI, because the latter mistakenly implies that intelligence falls on a single dimension, and perhaps even that humans are at the apex of that metric. In reality, intelligence is multidimensional: a 1970s pocket calculator surpasses the most intelligent human in some ways (such as for multiplication), as does a chimpanzee (short-term memory). At the same time, machines and animals are inferior to human intelligence on myriad other dimensions. The term “artificial general intelligence” (AGI) is often used as a synonym for HLAI. However, taken literally, it is the union of all types of intelligences, able to solve types of problems that are solvable by any existing human, animal, or machine. That suggests that AGI is not human-like.

The good news is that both automation and augmentation can boost labor productivity: that is, the ratio of value-added output to labor-hours worked. As productivity increases, so do average incomes and living standards, as do our capabilities for addressing challenges from climate change and poverty to health care and longevity. Mathematically, if the human labor used for a given output declines toward zero, then labor productivity would grow to infinity.
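The mathematical point is simply the definition of a ratio: holding value-added output fixed while labor-hours shrink makes measured productivity grow without bound. A toy calculation makes this concrete; the dollar figures below are arbitrary.

```python
# Labor productivity = value-added output / labor-hours worked.
# With output held fixed, the ratio grows without bound as
# labor-hours fall toward zero. Figures are arbitrary.

def labor_productivity(output, labor_hours):
    return output / labor_hours

output = 1_000_000  # value-added output, e.g. in dollars
for hours in (10_000, 100, 1):
    print(hours, labor_productivity(output, hours))
# 10000 100.0
# 100 10000.0
# 1 1000000.0
```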

The bad news is that no economic law ensures everyone will share this growing pie. Although pioneering models of economic growth assumed that technological change was neutral, in practice, technological change can disproportionately help or hurt some groups, even if it is beneficial on average.

In particular, the way the benefits of technology are distributed depends to a great extent on how the technology is deployed and the economic rules and norms that govern the equilibrium allocation of goods, services, and incomes. When technologies automate human labor, they tend to reduce the marginal value of workers’ contributions, and more of the gains go to the owners, entrepreneurs, inventors, and architects of the new systems. In contrast, when technologies augment human capabilities, more of the gains go to human workers.

A common fallacy is to assume that all or most productivity-enhancing innovations belong in the first category: automation. However, the second category, augmentation, has been far more important throughout most of the past two centuries. One metric of this is the economic value of an hour of human labor. Its market price as measured by median wages has grown more than tenfold since 1820. An entrepreneur is willing to pay much more for a worker whose capabilities are amplified by a bulldozer than one who can only work with a shovel, let alone with bare hands.

In many cases, not only wages but also employment grow with the introduction of new technologies. With the invention of the airplane, a new job category was born: pilots. With the invention of jet engines, pilot productivity (in passenger-miles per pilot-hour) grew immensely. Rather than reducing the number of employed pilots, the technology spurred demand for air travel so much that the number of pilots grew. Although this pattern is comforting, past performance does not guarantee future results. Modern technologies–and, more important, the ones under development–are different from those that were important in the past.

In recent years, we have seen growing evidence that not only is the labor share of the economy declining, but even among workers, some groups are beginning to fall even further behind. Over the past forty years, the numbers of millionaires and billionaires grew while the average real wages for Americans with only a high school education fell. Though many phenomena contributed to this, including new patterns of global trade, changes in technology deployment are the single biggest explanation.

If capital in the form of AI can perform more tasks, those with unique assets, talents, or skills that are not easily replaced with technology stand to benefit disproportionately. The result has been greater wealth concentration.

Ultimately, a focus on more human-like AI can make technology a better substitute for the many nonsuperstar workers, driving down their market wages, even as it amplifies the market power of a few. This has created a growing fear that AI and related advances will lead to a burgeoning class of unemployable or “zero marginal product” people.

As noted above, both automation and augmentation can increase productivity and wealth. However, an unfettered market is likely to create socially excessive incentives for innovations that automate human labor and provide too weak incentives for technology that augments humans. The first fundamental welfare theorem of economics states that under a particular set of conditions, market prices lead to a Pareto optimal outcome: that is, one where no one can be made better off without making someone else worse off. But we should not take too much comfort in that. The theorem does not hold when there are innovations that change the production possibilities set or externalities that affect people who are not part of the market.

[ . . . . . . . . ]

In sum, the risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policy-makers.

The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation. We can work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans. The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.

While both approaches can and do contribute to productivity and progress, technologists, businesspeople, and policy-makers have each been putting a finger on the scales in favor of replacement. Moreover, the tendency of a greater concentration of technological and economic power to beget a greater concentration of political power risks trapping a powerless majority into an unhappy equilibrium: the Turing Trap.

The backlash against free trade offers a cautionary tale. Economists have long argued that free trade and globalization tend to grow the economic pie through the power of comparative advantage and specialization. They have also acknowledged that market forces alone do not ensure that every person in every country will come out ahead. So they proposed a grand bargain: maximize free trade to maximize wealth creation and then distribute the benefits broadly to compensate any injured occupations, industries, and regions. It has not worked as they had hoped. As the economic winners gained power, they reneged on the second part of the bargain, leaving many workers worse off than before. The result helped fuel a populist backlash that led to import tariffs and other barriers to free trade. Economists wept.

Some of the same dynamics are already underway with AI. More and more Americans, and indeed workers around the world, believe that while the technology may be creating a new billionaire class, it is not working for them. The more technology is used to replace rather than augment labor, the worse the disparity may become, and the greater the resentments that feed destructive political instincts and actions. More fundamentally, the moral imperative of treating people as ends, and not merely as means, calls for everyone to share in the gains of automation.

The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. A good start would be to replace the Turing Test, and the mindset it embodies, with a new set of practical benchmarks that steer progress toward AI-powered systems that exceed anything that could be done by humans alone. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.

Source : Dædalus

China Is Using AI and 3D Printing to Build a 590-foot-tall Dam on the Tibetan Plateau

Matthew Loh wrote . . . . . . . . .

Chinese scientists say they’re 3D printing a 590-foot-tall dam by 2024 using AI and robots.

The project will use an AI system with unmanned trucks, bulldozers, rollers, and other equipment.

The researchers say their method eliminates human error and safety concerns for workers.

China is poised to build a hydropower dam in two years using artificial intelligence, construction robots, and zero human labor, scientists involved in the project said.

The Yangqu dam on the Tibetan Plateau is set to be assembled layer by layer, much as in 3D printing, The South China Morning Post first reported on Sunday, citing a paper published in April in the peer-reviewed Journal of Tsinghua University (Science and Technology).

If and when it is completed, the ambitious project will likely be the world’s tallest structure built using 3D printing processes. The current record is held by a two-story office building in Dubai, which stands 20 feet high.

However, the paper said the Yangqu dam will be 590 feet tall. By comparison, the Hoover Dam’s structural height is 726 feet.

At Yangqu, a central AI system will be used to oversee a massive automated assembly line that starts with a fleet of unmanned trucks used to transport construction materials to parts of the worksite, per the scientists.

Once the materials arrive, unmanned bulldozers and pavers will turn them into a layer of the dam, and then rollers equipped with sensors will help to press each layer so that they become firm and durable, they said.

Per the paper, when a layer is complete, the robots will send information about the state of construction back to the AI system.
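Taken together, the workflow the researchers describe is a closed loop: deliver material, spread it, compact it, report the layer state, repeat. The schematic sketch below illustrates that loop; the stage names, the acceptance check, and the layer count are illustrative assumptions, not details from the paper.

```python
# Schematic of the layer-by-layer control loop described above.
# Stage names and the acceptance check are illustrative only.

TARGET_LAYERS = 5  # the real dam would require far more layers

def build_layer(layer_no):
    """Run one layer through the automated pipeline and return the
    status report the robots would send to the central AI system."""
    stages = ["trucks deliver material",
              "bulldozers and pavers spread layer",
              "sensor-equipped rollers compact layer"]
    log = [f"layer {layer_no}: {s}" for s in stages]
    return {"layer": layer_no, "compacted": True, "log": log}

def central_ai():
    reports = []
    for n in range(1, TARGET_LAYERS + 1):
        report = build_layer(n)
        if not report["compacted"]:   # feedback loop: redo a bad layer
            report = build_layer(n)
        reports.append(report)
    return reports

dam = central_ai()
print(len(dam))           # 5
print(dam[0]["log"][0])   # layer 1: trucks deliver material
```

The interesting engineering is hidden in the acceptance check: the rollers' sensors are what tell the central system whether a layer is firm enough before the next one is started.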

However, the mining of the construction material will still have to be done manually, the researchers noted.

The AI system and its army of robots will help eliminate human error, such as when roller operators don’t keep to a straight line or when truck drivers deliver materials to the wrong spot, said lead author Liu Tianyun of Tsinghua University, according to SCMP.

The system will also allow on-site work to progress continuously without safety concerns for human workers, the researchers said, per the outlet.

According to the scientists, the completed Yangqu dam will provide 5 billion kilowatt-hours of electricity to China every year.

If successful, the building method could provide a blueprint for other construction projects, such as road construction, Liu’s team said, as reported by SCMP.

China, which is facing a plummeting birth rate and possible labor shortages, has in recent years turned to automation to keep its industries going.

Source : Business Insider

In Pictures: These 3-Michelin-starred Plates Were Invented by AI. The Food Doesn’t Even Exist.

In the World’s Fastest Drummer, Scientists See a Bionics Breakthrough

Michaela Haas wrote . . . . . . . . .

With an ear-deafening bang, Jason Barnes’ world exploded in January 2012. “I saw a pink flash and thought a bomb went off,” he says about that life-altering moment. The then-22-year-old was cleaning the exhaust duct at a restaurant near Atlanta when a transformer malfunctioned and sent 22,000 volts through the right side of his body, a load that could have killed him.

“I was standing with rubber boots in water, so I was not grounded,” he remembers. “I got completely cooked.” He survived, but after the seventh surgery the doctors concluded that his right hand could not be saved, and amputated it below the elbow.

Before the accident, the lanky blond musician with the tattooed arms and the black plug earrings had played the piano, the guitar and the drums. “The drums were the most important thing in my life,” he says. Now his dream of pursuing a professional career as a drummer seemed to be over. For several months, he became so depressed he barely got up from his bed.

“My world collapsed.” He tried to find the upside: “Most people who take such a massive hit lose all four limbs. I was lucky to only lose one hand.” Three months after the accident, he dragged his drum kit out of the garage, fastened a stick to his bandages and played, “out of pure boredom and despair.” But it hurt like hell, and he didn’t have enough grip.

His drum teacher introduced him to Gil Weinberg, the founding director of the Georgia Tech Center for Music Technology and an eminent authority on artificial intelligence. The Israeli-born computer scientist had ambitions of becoming a concert pianist, but while he studied in Tel Aviv and then at the Massachusetts Institute of Technology (MIT), he became fascinated with artificial intelligence.

Combining his two passions, he has created experimental “robot musicians” such as the marimba-playing Shimon, who writes his own lyrics and has his own Spotify account. With artificial intelligence, Shimon plays and improvises with human bandmates. When listeners close their eyes, they usually think it’s real people jamming.

Over the course of several months, Weinberg combined his musical talents with his A.I. skills and developed bionic prosthetics tailor-made for Barnes to play the drums. “We use electromyography (EMG),” Barnes explains. “Its sensors on my upper arm muscles pick up signals from my residual limb. I can flex my muscles to tighten my grip, and when I relax my muscles, it loosens the grip.”
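The control scheme Barnes describes — flex to tighten the grip, relax to loosen it — is, at its simplest, a proportional mapping from muscle-signal amplitude to grip force. The sketch below illustrates that idea only; the baseline, scale, and function names are made up and are not from the Georgia Tech system.

```python
# Minimal sketch of EMG-style proportional grip control: the amplitude of
# the muscle signal maps onto grip force, so flexing tightens the grip
# and relaxing loosens it. All values here are illustrative.

REST_LEVEL = 0.1  # hypothetical baseline EMG amplitude with the muscle at rest
MAX_LEVEL = 1.0   # hypothetical amplitude at full flex

def grip_force(emg_amplitude: float) -> float:
    """Map a raw EMG amplitude to a grip force from 0 to 100 percent."""
    # Subtract the resting baseline, scale to 0-1, then clamp and convert to %.
    normalized = (emg_amplitude - REST_LEVEL) / (MAX_LEVEL - REST_LEVEL)
    return round(100 * max(0.0, min(1.0, normalized)), 1)

print(grip_force(0.1))   # relaxed muscle -> 0.0 (grip fully loosened)
print(grip_force(0.55))  # partial flex  -> 50.0
print(grip_force(1.0))   # full flex     -> 100.0 (grip fully tightened)
```

Real EMG control adds filtering and calibration on top of this, but the flex-tighten, relax-loosen mapping is the core idea.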

The bionic arm includes two drumsticks — one that translates Barnes’s muscle movements and a second, autonomous stick that has been “trained” by machine learning. After being fed many hours of improvisations by jazz greats such as John Coltrane and Thelonious Monk, the A.I. drumstick can improvise on its own. “It’s almost like having a second, bionic drummer in the band,” Barnes half-jokes. With his A.I. arm, Barnes even made it into the Guinness Book of World Records as the world’s fastest drummer, achieving 20 hits per second with each stick.

“That was cool,” he said, but what matters more to him is how the technology has transformed his entire career, benefits him every day in his life and has the potential to benefit others. The “cyborg drummer,” as some media outlets nicknamed him, is now living his dream as a full-time producer and songwriter, and has performed internationally under his stage name Cybrnetx, including at the Kennedy Center in Washington DC. Via Zoom, he proudly shows off his brand new recording studio near Atlanta, which he built himself.

With a quick flip from his left hand, he rotates his artificial right hand out of its metal socket and demonstrates how he can exchange his “everyday hand” with a “music hand.” “It just honestly works insanely well,” he says about the various prosthetic hands that allow him not only to play music but also to tinker with his car and regulate his sound mixers. “The only thing I can’t do is clip my fingernails,” he quips.

It is hard to imagine how differently Barnes’ life would have evolved had he not made the connection with Gil Weinberg. Barnes is the beneficiary of high-end research that might one day ease the challenges that amputees, and not only amputees, face in accomplishing very precise tasks. “People are always afraid robots will take away our jobs,” Weinberg says. “Here, we have a human who wouldn’t be able to work in his job without robotics.”

Weinberg is convinced that the astonishing progress in bionics opens enormous possibilities for other clients, for instance, people who have suffered traumatic brain injuries and need to relearn movements. A different team at Georgia Tech has invented a glove that teaches people to play the piano or learn Braille within a few hours (it usually takes several months) using passive haptic learning. “You need incredible precision and dexterity to make music,” says Weinberg, “so when we make it possible for them to play the piano, they can do pretty much everything else such as typing on a keyboard or doing mechanical work.”

To this end, his team has also created a black arm called the “Skywalker” with which Barnes can control each finger separately to play the piano. Star Wars hero Luke Skywalker famously lost his right hand in a struggle with Darth Vader and replaced it with a prosthetic that let him grip and feel. Ultrasound sensors translate the muscle tension in Barnes’ upper arm into subtle signals and finger movements. It might not be enough for Carnegie Hall, but he can play the title score of Star Wars and Beethoven’s Ode to Joy.

“Identifying where we can create something unique, where robots inspire and push us to uncharted territories, is tricky,” Weinberg says, “because we don’t want people to think we are replacing musicians.” He is fascinated by exploring new forms of creative expression through the interaction between human artists and robots. In his latest production, FOREST, he “trained” robots and humans to “trust” each other and dance together, emulating and responding to each other’s movements.

“It’s like having these band members who are not quite opening up to you yet, but they’re good musicians,” Barnes describes the experience. “We kind of vibe together.”

With the help of A.I., machines can already paint like Rembrandt, write entire articles, or improvise like John Coltrane. At the same time, blind people can learn to see with bionic eyes and amputees can run and jump on cyborg legs. The next frontier in sophisticated prosthetics is achieved through brain implants. At the University of Pittsburgh, researchers implanted rice-sized electrode arrays in the cortex of a partly paralyzed young man, allowing him to control his paralyzed hand with motion commands and touch feedback.

Atom Limbs, a new startup, is trying to bring mind-controlled prosthetics to the market by 2023 with a technology developed at Johns Hopkins’ Applied Physics Laboratory with a $120 million grant from the Defense Department. Its 200 sensors give very precise, detailed sensory feedback. (For comparison, Barnes’ current prosthetic has eight sensors.) Gil Weinberg does not want to go as far as implanting brain chips, but Barnes is not afraid to experiment and is eagerly trying to get on Atom Limbs’ waitlist.

Because high-tech prosthetics are unaffordable for most people, other inventors are trying to make simpler versions available through open-source technology. French inventor Nicolas Huchet, who lost his right hand in a work accident, developed his own smart prosthetics and now leads the nonprofit My Human Kit, which prints prosthetics for less than $1,000 using open-source designs and 3D printers.

The cost is the biggest disadvantage of Barnes’ superhuman drumming hand, too: Because it is owned by Georgia Tech and cost more than $100,000, the insurance company does not allow him to take it home or play with it outside of Georgia Tech projects. Therefore, Barnes has transitioned to much simpler 3D-printed models for his other music projects and as his “everyday hand.”

After an attempt to fundraise for him on Kickstarter failed, Google and Georgia Tech collaborated on building him his newest “music hand” with TensorFlow that he can take home. “It uses artificial intelligence, but does not play anything on its own,” Barnes explains. “It’s able to read my muscles and determine what I’m trying to do.”

He and the team are still working to improve it, making it sleeker and lighter, hopefully in time for Barnes to play at the opening of the Invictus Games in The Hague this year. “In the ’90s, this stuff was science fiction,” Barnes marvels. “When you look at the first Star Wars movies, they seemed incredible. Now it’s happening.”

Barnes is fascinated by the possibilities of body hacking. He already has an NFC implant, a chip under the skin above his left thumb, with which he can open his studio door and make online payments. Barnes believes that in the future, “everyone will be a bit of a cyborg. Those movies inspired people to dream up this kind of technology and make it a reality. My ultimate goal was always to play music and go places. This is all I ever wanted in the first place. It’s all coming to fruition.”

May the Force be with him.

Source : Reasons to be Cheerful

AI Can Identify Heart Disease from an Eye Scan

Scientists have developed an artificial intelligence system that can analyse eye scans taken during a routine visit to an optician or eye clinic and identify patients at a high risk of a heart attack.

Doctors have recognised that changes to the tiny blood vessels in the retina are indicators of broader vascular disease, including problems with the heart.

In the research, led by the University of Leeds, deep learning techniques were used to train an AI system to automatically read retinal scans and identify those people who, over the following year, were likely to have a heart attack.

Deep learning is a complex series of algorithms that enable computers to identify patterns in data and to make predictions.

Writing in the journal Nature Machine Intelligence, the researchers report in their paper – Predicting infarction through your retinal scans and minimal personal information – that the AI system had an accuracy of between 70% and 80% and could be used as a second referral mechanism for in-depth cardiovascular examination.

The use of deep learning in the analysis of retinal scans could revolutionise the way patients are regularly screened for signs of heart disease.

Earlier identification of heart disease

Professor Alex Frangi, who holds the Diamond Jubilee Chair in Computational Medicine in the School of Computing at the University of Leeds and is a Turing Fellow at the Alan Turing Institute, supervised the research. He said: “Cardiovascular diseases, including heart attacks, are the leading cause of early death worldwide and the second-largest killer in the UK. This causes chronic ill-health and misery worldwide.

“This technique opens up the possibility of revolutionising the screening of cardiac disease. Retinal scans are comparatively cheap and routinely used in many optician practices. As a result of automated screening, patients who are at high risk of becoming ill could be referred for specialist cardiac services.

“The system could also be used to track early signs of heart disease.”

The study involved a worldwide collaboration of scientists, engineers and clinicians from the University of Leeds; Leeds Teaching Hospitals NHS Trust; the University of York; the Cixi Institute of Biomedical Imaging in Ningbo, part of the Chinese Academy of Sciences; the Université Côte d’Azur, France; the National Center for Biotechnology Information and the National Eye Institute, both part of the National Institutes of Health in the US; and KU Leuven in Belgium.

The UK Biobank provided data for the study.

Chris Gale, Professor of Cardiovascular Medicine at the University of Leeds and a Consultant Cardiologist at Leeds Teaching Hospitals NHS Trust, was one of the authors of the research paper.

He said: “The AI system has the potential to identify individuals attending routine eye screening who are at higher future risk of cardiovascular disease, whereby preventative treatments could be started earlier to prevent premature cardiovascular disease.”

Deep learning

During the deep learning process, the AI system analysed the retinal scans and cardiac scans from more than 5,000 people. The AI system identified associations between pathology in the retina and changes in the patient’s heart.

Once the image patterns were learned, the AI system could estimate the size and pumping efficiency of the left ventricle, one of the heart’s four chambers, from retinal scans alone. An enlarged ventricle is linked with an increased risk of heart disease.

With information on the estimated size of the left ventricle and its pumping efficiency combined with basic demographic data about the patient, their age and sex, the AI system could make a prediction about their risk of a heart attack over the subsequent 12 months.
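As described, the system works in two stages: a deep network first estimates left-ventricle size and pumping efficiency from the retinal scan, and those estimates are then combined with age and sex to score 12-month heart-attack risk. The sketch below illustrates only the second, combining stage with a toy logistic model; every coefficient, field name, and threshold is invented, since the actual model is not reproduced in this article.

```python
# Hypothetical sketch of the second stage described above: combine
# retina-derived estimates of left-ventricle (LV) mass and ejection
# fraction with age and sex into a 12-month risk score. The first stage
# (the deep network reading the scan) is not shown, and every
# coefficient here is invented for illustration.

from dataclasses import dataclass
import math

@dataclass
class Patient:
    lv_mass_g: float          # estimated left-ventricle mass (grams)
    ejection_fraction: float  # estimated pumping efficiency (%)
    age: int
    is_male: bool

def risk_score(p: Patient) -> float:
    """Toy logistic model returning a 0-1 risk of an event within 12 months."""
    # Invented coefficients: a larger ventricle, lower ejection fraction,
    # older age, and male sex each push the score upward.
    z = (0.02 * (p.lv_mass_g - 140)
         - 0.05 * (p.ejection_fraction - 60)
         + 0.04 * (p.age - 50)
         + 0.3 * p.is_male
         - 3.0)
    return 1 / (1 + math.exp(-z))

low = Patient(lv_mass_g=130, ejection_fraction=65, age=45, is_male=False)
high = Patient(lv_mass_g=200, ejection_fraction=45, age=70, is_male=True)
print(risk_score(low) < risk_score(high))  # enlarged LV, lower EF -> higher risk
```

The design mirrors the article’s logic: the enlarged-ventricle signal recovered from the retina does the heavy lifting, and the demographic fields only adjust the score.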

Currently, details about the size and pumping efficiency of a patient’s left ventricle can only be determined if they have diagnostic tests such as echocardiography or magnetic resonance imaging of the heart. Those diagnostic tests can be expensive and often only available in a hospital setting, making them inaccessible for people in countries with less well-resourced healthcare systems – or unnecessarily increasing healthcare costs and waiting times in developed countries.

Sven Plein, British Heart Foundation Professor of Cardiovascular Imaging at the University of Leeds and one of the authors of the research paper, said: “The AI system is an excellent tool for unravelling the complex patterns that exist in nature, and that is what we have found – the intricate pattern of changes in the retina linked to changes in the heart.”

Source: University of Leeds

Chinese Company Names AI Debt Collector Employee of the Year

Jiang Yaling wrote . . . . . . . . .

Chinese real estate giant Vanke said its best employee of 2021 was not a human.

The company declared an artificial intelligence-powered debt collector named Cui Xiaopan as its employee of the year, Sixth Tone’s sister publication The Paper reported Tuesday, citing the company’s top executive. Developed by an in-house team using toolkits from Xiaoice, an AI system owned by Microsoft, Cui is depicted as a young woman and joined Vanke’s accounting department in February.

“Under the support of systematic algorithms, she quickly learned the methods of humans to discover problems in work procedures and data and has displayed her skills hundreds of thousands of times more than humans,” Yu Liang, chairman of the board of directors of Vanke, wrote in a social media post on Dec. 20, adding that Cui has a 91.44% success rate in collecting overdue payments.

China’s AI software market is growing quickly and is estimated to be worth around 23 billion yuan ($3.6 billion) by 2030, with “virtual humans” emerging as the most popular application, according to the market research firm International Data Corporation.

“Although the industry’s growth has been dampened by COVID-19, users’ awareness of AI and the technology’s applications are becoming more sophisticated, which will result in steady growth industry-wide,” the firm wrote in a 2021 report.

Before Cui, so-called virtual humans had already appeared as a news anchor for the state-run Xinhua News Agency, a computer science student at the prestigious Tsinghua University in Beijing, and even a chatbot providing company to lonely men. The latter two projects were both supported by Xiaoice.

Source : Sixth Tone


NBD AI TV Goes Live, Ushering Video Broadcasting into the “Unmanned” Era

Zhang Nu wrote . . . . . . . . .

In 2021, National Business Daily (每经) drew on its experience as a professional financial media outlet to independently build the “NBD Intelligent Editing and Publishing Platform,” re-engineering its editorial and publishing workflow. Its AI review system, AI-powered graphics tool, and AI-assisted writing tool received three computer software copyright registration certificates from the National Copyright Administration.

Source : Sohu