Data, Info and News of Life and Economy


Infographic: The 50 Biggest Data Breaches From 2004–2021


Source : Visual Capitalist

Why Freedom of Numbers Matters

What do you do if you’re in charge of a country and things are not going your way?

Last week, President Erdogan of Turkey was presented with the unwelcome news that the inflation rate in his country was close to 50%, with the Turkish lira in free fall. Rather than questioning his own policy, or blaming the pandemic/opposition/weather (as so many other politicians have done before), he opted to fire the chief statistician.

He’s not the first. The Government of Greece prosecuted its chief statistician in 2013 over a dispute about economic statistics. The chief statistician of Fiji was marched out of his office by security guards a few months ago in a row over poverty data.

Statisticians may make unlikely heroes, but these are the people who stand up for the truth against governments who are prepared to go to extreme lengths to say it isn’t so. They, and their allies in the global movement for open data, are just as critical to democracy as the journalists standing up against limits to free speech. Freedom of expression applies to numbers as well as words.

Politicians must govern the world as it is, not as they would like it to be. As the Vice President of Ghana, and champion of good data, Mahamudu Bawumia put it, ‘Statistics deliver both good and bad news, but effective governments need to hear both.’ Good data, in other words, leads to good policy — and bad data puts governments at the mercy of unseen and unknown events.

Suppressing or fixing the data to make the economy look better can make it perform worse. A World Bank paper found that producing and releasing regular, credible statistics can have a bigger positive impact on economic growth than opening trade or investing in education. The authors compared countries with stronger and weaker data systems, and found that those with stronger systems tended to grow faster, even after controlling for other variables.

Their advice to autocrats considering concealing the truth from citizens? ‘Data opacity can be used as a tool to keep citizens in the dark, but it might come at the cost of foregoing opportunities to increase the economic pie’.

COVID-19 has raised the political temperature around data even further. In the USA, the Republican Governor of Florida has been accused of falsifying data to undercount the number of cases and deaths. Not all state officials have been prepared to go along with the deception: whistleblowers have spoken out after being asked to report fake numbers, and a data analyst in Florida found armed police at her door in the middle of the night in December 2020 after setting up her own COVID-19 dashboard to report the truth. At over 26,000 cases per 100,000 people, Florida has experienced a higher COVID-19 case rate than the US average of 22,773 per 100,000.
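As a rough check on the per-capita figures above, a case rate is just the raw case count normalized to a population of 100,000. The sketch below illustrates the arithmetic; the population and case-count inputs are assumptions chosen for illustration (roughly 21.5 million residents for Florida, 331 million for the USA), not figures from the article.

```python
def cases_per_100k(cases: int, population: int) -> float:
    """Normalize a raw case count to a rate per 100,000 residents."""
    return cases / population * 100_000

# Illustrative inputs only: these population and case counts are
# assumptions, not numbers reported in the article.
florida_rate = cases_per_100k(5_600_000, 21_500_000)   # ~26,000 per 100k
us_rate = cases_per_100k(75_400_000, 331_000_000)      # ~22,800 per 100k
print(f"Florida: {florida_rate:,.0f} per 100k; USA: {us_rate:,.0f} per 100k")
```

The point of the normalization is that it makes states of very different sizes directly comparable, which is exactly why suppressing or distorting the underlying counts is so consequential.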

Others have gone even further. In May 2020, then-President Magufuli ordered the authorities in Tanzania to stop reporting on COVID-19 cases altogether. With no data to prove otherwise, he declared the pandemic over in June of that year. He died of what opposition politicians claimed was COVID-19 in March 2021.

Where does it end? Independent statistics are a tool for accountability, but those that are doctored become an instrument of propaganda. In George Orwell’s 1984, the hero Winston Smith is employed at the Ministry of Truth to retrofit statistics to suit whatever Big Brother defines as reality on any given day. As the novel observes, “statistics were just as much a fantasy in their original version as in their rectified version. A great deal of the time you were expected to make them up out of your head.”

As Orwell put it: “If liberty means anything at all, it means the right to tell people what they do not want to hear.” If we care about democracy we need to take attempts to muzzle statistics just as seriously as attempts to suppress the media or impose any other limits on freedom of speech. Set the statistics free!

Source : Data4SDGs

Philip Agre Predicted the Dark Side of the Internet 30 Years Ago. Why Did No One Listen?

Reed Albergotti wrote . . . . . . . . .

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.


“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy.

Charlotte Lee, who studied under Agre as a graduate student at UCLA, and is now a professor of human-centered design and engineering at the University of Washington, said she is still studying his work and learning from it today. She said she wishes he were around to help her understand it even better.

But Agre isn’t available. In 2009, he simply dropped off the face of the earth, abandoning his position at UCLA. When friends reported Agre missing, police located him and confirmed that he was okay, but Agre never returned to the public debate. His closest friends declined to further discuss details of his disappearance, citing respect for Agre’s privacy.

Instead, many of the ideas and conclusions that Agre explored in his academic research and his writing are only recently cropping up at think tanks and nonprofits focused on holding technology companies accountable.

“I’m seeing things Phil wrote about in the ’90s being said today as though they’re new ideas,” said Christine Borgman, a professor of information studies at UCLA who helped recruit Agre for his professorship at the school.

The Washington Post sent a message to Agre’s last known email address. It bounced back. Attempts to contact his sister and other family members were unsuccessful. A dozen former colleagues and friends had no idea where Agre is living today. Some said that, as of a few years ago, he was living somewhere around Los Angeles.


Agre was a child math prodigy who became a popular blogger and contributor to Wired. Now he has been all but forgotten in mainstream technology circles. But his work is still regularly cited by technology researchers in academia and is considered foundational reading in the field of social informatics, or the study of the effects of computers on society.

Agre earned his doctorate at MIT in 1989, the same year the World Wide Web was invented. At that time, even among Silicon Valley venture capitalists betting on the rise of computers, few people foresaw just how deeply and quickly the computerization of everything would change life, economics or even politics.

A small group of academics, Agre included, observed that computer scientists viewed their work in a vacuum largely disconnected from the world around it. At the same time, people outside that world lacked a deep enough understanding of technology or how it was about to change their lives.

By the early 1990s, Agre came to believe the field of artificial intelligence had gone astray, and that a lack of criticism of the profession was one of the main reasons. In those early days of artificial intelligence, most people in AI were focused on complex math problems aimed at automating human tasks, with limited success. Yet the industry described the code they were writing as “intelligent,” giving it human attributes that didn’t actually exist.

His landmark 1997 paper, “Lessons Learned in Trying to Reform AI,” is still widely considered a classic, said Geoffrey Bowker, professor emeritus of informatics at University of California at Irvine. Agre noticed that those building artificial intelligence ignored critiques of the technology from outsiders. But Agre argued criticism should be part of the process of building AI. “The conclusion is quite brilliant and has taken us as a field many years to understand. One foot planted in the craftwork in design and the other foot planted in a critique,” Bowker said.

Nevertheless, AI has barreled ahead unencumbered, weaving itself into even “low tech” industries and affecting the lives of most people who use the Internet. It guides people on what to watch and read on YouTube and Facebook, it determines sentences for convicted criminals, allows companies to automate and eliminate jobs, and allows authoritarian regimes to monitor citizens with greater efficiency and thwart attempts at democracy.


Today’s AI, which has largely abandoned the type of work Agre and others were doing in the ’80s and ’90s, is focused on ingesting massive amounts of data and analyzing it with the world’s most powerful computers. But as the new form of AI has progressed, it has created problems — ranging from discrimination to filter bubbles to the spread of disinformation — and some academics say that is in part because it suffers from the same lack of self-criticism that Agre identified 30 years ago.

In December, Google’s firing of AI research scientist Timnit Gebru after she wrote a paper on the ethical issues facing Google’s AI efforts highlighted the continued tension over the ethics of artificial intelligence and the industry’s aversion to criticism.

“It’s such a homogenous field, and people in that field don’t see that maybe what they’re doing could be criticized,” said Sofian Audry, a professor of computational media at University of Quebec in Montreal who began as an artificial intelligence researcher. “What Agre says is that it is worthwhile and necessary that the people who develop these technologies are critical,” Audry said.

Agre grew up in Maryland, where he said he was “constructed to be a math prodigy” by a psychologist in the region. He said in his 1997 paper that school integration led to a search for gifted and talented students. Agre later became angry at his parents for sending him off to college early, and his relationship with them suffered as a result, according to a friend, who spoke on the condition of anonymity because Agre did not give him permission to speak about his personal life.

Agre wrote that when he entered college, he wasn’t required to learn much other than math and “arrived in graduate school at MIT with little genuine knowledge beyond math and computers.” He took a year off graduate school to travel and read, “trying in an indiscriminate way, and on my own resources, to become an educated person,” he wrote.

Agre began to rebel, in a sense, from his profession, seeking out critics of artificial intelligence, studying philosophy and other academic disciplines. At first he found the texts “impenetrable,” he wrote, because he had trained his mind to dissect everything he read as he would a technical paper on math or computer science. “It finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms,” he wrote.

Agre’s blossoming intellectual interest took him away from computer science and transformed him into something unusual at that time: a brilliant mathematician with a deep understanding of the most advanced theories in artificial intelligence, who could also step outside of that realm and look at it critically from the perspective of an outsider.


For this reason, Agre became a sought-after academic. Several former colleagues told stories about Agre’s insatiable appetite for books from across the academic and popular landscape, piled high in his office or in the library. He became known for original thinking, buoyed by his wide-ranging curiosity.

“He was a very enlightening person to think with — someone you would want to have a meal with at every opportunity,” Borgman said.

Agre combined his understanding of the humanities and technology to dissect the impact technology would have on society as it progressed. Today, many of his analyses read like predictions come true.

In a 1994 paper, published a year before the launches of Yahoo, Amazon and eBay, Agre foresaw that computers could facilitate the mass collection of data on everything in society, and that people would overlook the privacy concerns because, rather than “big brother” collecting data to surveil citizens, it would be many different entities collecting the data for lots of purposes, some good and some problematic.

More profoundly, though, Agre wrote in the paper that the mass collection of data would change and simplify human behavior to make it easier to quantify. That has happened on a scale few people could have imagined, as social media and other online networks have corralled human interactions into easily quantifiable metrics, such as being friends or not, liking or not, a follower or someone who is followed. And the data generated by those interactions has been used to further shape behavior, by targeting messages meant to manipulate people psychologically.

In 2001, he wrote that “your face is not a bar code,” arguing against the use of facial recognition in public places. In the article, he predicted that, if the technology continued to develop in the West, it would eventually be adopted elsewhere, allowing, for instance, the Chinese government to track everyone inside its country within 20 years.

Twenty years later, a debate is raging in the United States over the use of facial recognition technology by law enforcement and immigration officials, and some states have begun to ban the technology in public places. Despite outcry, it may be too late to curtail the proliferation of the technology. China, as Agre predicted, has already begun employing it on a mass scale, allowing an unprecedented level of surveillance by the Communist Party.

Agre brought his work into the mainstream with an Internet mailing list called the Red Rock Eater News Service, named after a joke in Bennett Cerf’s “Book of Riddles.” It’s considered an early example of what would eventually become blogs.

Agre was also, at times, deeply frustrated with the limitations of his work, which was so far ahead of its time that it went unheeded until 25 years later. “He felt that people didn’t get what he was saying. He was writing for an audience of the benighted and the benighted were unable to understand what he was saying,” Bowker said.


“He was certainly frustrated that there wasn’t more uptake. But people who are a generation ahead of themselves, they’re always a generation ahead of themselves,” Borgman said.

Agre’s final project was what friends and colleagues colloquially called “The Bible of the Internet,” a definitive book that would dissect the foundations of the Internet from the ground up. But he never finished it.

Agre resurfaces from time to time, according to a former colleague, but has not been seen in years.

“Why do certain kinds of insightful scholars or even people with such an insightful understanding of some field essentially throw their arms in the air and go, I’m done with this?” asked Simon Penny, a professor of fine arts at University of California at Irvine who has studied Agre’s work extensively. “Psychologically people have these breaks. It’s a big question. Who goes on and why? Who continues to be engaged in some sort of battle, some sort of intellectual project and at what point do they go, I’m done? Or say, ‘This is not relevant to me anymore and I’ve seen the error of my ways.’”

Several years ago, former colleagues at UCLA attempted to put together a collection of his work, but Agre resurfaced, telling them to stop.

Agre’s life’s work was left uncompleted, questions posed but unanswered. John Seberger, a postdoctoral fellow in the Department of Informatics at Indiana University who has studied Agre’s work extensively, said that’s not necessarily a bad thing.

Seberger said Agre’s work offers a way of thinking about the problems that face an increasingly digital society. But today, more than a decade after Agre’s disappearance, the problems are more clearly understood and there are more people studying them.

“Especially right now when we are dealing with profound social unrest, the possibility to involve more diverse groups of scholars in answering these questions that he left unanswered can only benefit us,” he said.

Source : The Washington Post

Tim Cook on Why It’s Time to Fight the “Data-Industrial Complex”

Zach Baron wrote . . . . . . . . .

It has become relatively common for tech observers and even regular everyday citizens to warn about the insidious threats posed to society by technology companies run amok. But those warnings rarely come from the top of the industry, and even more rarely do they come from someone as powerful and influential as Apple CEO Tim Cook. But on Thursday morning, Cook joined the CPDP Computers, Privacy and Data Protection conference in Brussels to give a surprisingly blunt and direct speech decrying the emergence of what he called “the data-industrial complex.”

In what was widely interpreted to be a reference to Facebook and other Apple competitors (none of which he named), Cook described a vast and opaque industry that has arisen around the capture of massive amounts of personal data, often without the knowledge of users, which is then aggregated and monetized and — at times — used for nefarious ends. This practice, Cook said, “degrades our fundamental right to privacy first, and our social fabric by consequence,” and contributes to an ecosystem full of “rampant disinformation and conspiracy theories juiced by algorithms.” It is a world, as he put it, referencing a now nearly ubiquitous idea in tech criticism, in which you are no longer the customer, but the product.

Cook also highlighted two new Apple features. The first is what the company is calling a “privacy nutrition label” — a section on App Store product pages that explains every app’s privacy practices, including what they do with your data. The second, already more controversial, is App Tracking Transparency, a feature that will require apps to get permission before tracking your data, and which will become mandatory in the very near future. ATT, as Apple calls it, has been hailed by privacy advocates around the world as a welcome step in the effort to shore up individual rights against a massive and sometimes unscrupulous tech industry; it has also been harshly criticized by some of Apple’s competitors, like Facebook, which continues to rely on some degree of tracking to target the advertising it sells. In a December full-page ad in the New York Times, the Washington Post, and elsewhere, Facebook alleged that “these changes will be devastating to small businesses” who depend on tracking-based advertising to build their brands and sell their products. (Needless to say, Apple disagrees.)

At the center of all this is Cook, who gave his speech in a suit and tie, but had already replaced them with a cozy-looking vest when we spoke, less than an hour later. Over video chat, Cook is mild-mannered and earnest; he began and ended our conversation by throwing up the peace sign, in greeting and farewell. But he’d also begun the day by castigating, in harsh and unsparing language, unnamed “purveyors of fake news and peddlers of division” and lamenting the loss of “the freedom to be human” — not exactly stock or safe CEO speechifying. “A social dilemma,” he said, “cannot be allowed to become a social catastrophe.” During our conversation, we talked about the principles and trade-offs behind Apple’s push for greater privacy for its users, and just how dire the situation is around data harvesting and its effect on our social fabric; we also talked about whether Tim Cook is as addicted to his iPhone as the rest of us are to ours.

GQ: I thought we could start with your speech this morning: I was struck by the vehemence of it. You talked about “Rampant disinformation and conspiracy theories juiced by algorithms.” You talked about the degradation of our very social fabric. This isn’t normally the kind of thing you hear from a CEO of a major company. I’m curious what brought you to this point, to speak with that kind of urgency?

Tim Cook: Yeah, I’ve probably never been a normal CEO. That’s probably good to point out from the word “go.” I feel that way, Zach. I feel very much that we’re in a situation today where the internet’s become too dark a place. It can be so empowering. And yet what has happened is the tracking sort of without our consent — I don’t mind tracking with consent — but I think too many people are just tracking [without our consent], and people either are not aware of it, or they’re not aware of the extent of it. And so what we’re trying to do is sort of bring it back to people, and give people the power, give people the choice. Because you can see what happens when that’s not the case.

I’m curious about exactly that: what happens when we don’t get to choose what happens to our data. Obviously Apple talks a lot about privacy as a value, but what exactly do we have to be worried about? Is there an example that you could give of the kind of alarming stuff you are seeing as a result of what you call the data industrial complex?

Here’s one that I think is not well understood, but that your reader might be interested in: Think about for a moment, if you all of a sudden find out that you’re being surveilled every moment of the day. Your online life is being surveilled. And if somebody develops a 360 degree view of that, what is going to happen to your behavior over time? You’re going to restrict it. You’re going to begin thinking, Well, I don’t really want somebody to know that I’m exploring that, or looking at that, or investigating that. And you’re going to restrict and restrict and restrict. And who wants to be in that world where we’re self-censoring ourselves in such a mass way across society that you wind up with people that are thinking less, feeling less, doing less? This is not an environment any of us wants to be a part of. And I worry that that’s where we’re currently headed.

This issue, privacy, has at times seemed very personal to you — is that in fact the case?

Well, it’s personal for Apple in that we’ve been focused on it from the start of the company. We could see that this digital footprint piece could be abused in a way. We weren’t sure exactly how, but we knew that it would not be good. And unfortunately, that has played out in so many areas. You don’t have to look too far to find some recent examples, both in things that are in the news and not in the news. And so, this is not about Tim. This is about Apple and Apple really giving the user, empowering the user to make a choice. Apple’s always been about democratizing things, you know, democratizing technology — you used to have to be a gazillionaire to make a movie. Now you can make a movie on your iPhone. We love that. We love democratizing things like this, and we love democratizing the data down to the individual, where the individual is deciding whether they choose to share it or not.

I wonder if you could make concrete something you just alluded to, in terms of examples in the news of bad consequences stemming from the abuse of digital data. When you say that, is there something specific that you have in mind?

Well, you know, the tools that are being used to develop a 360 degree view of people’s lives are the same tools that are used to target them to form extremist groups, or to organize to ransack the Capitol, or whatever the situation may be. And these are not separate kinds of things. They’re the same thing. They’re manipulating people’s behavior.

There’s been a deep focus on how other tech companies, which provide their services for free, become places where the user is the product, right? That’s a phrase you’ve used before. And they use all this information to build algorithms that addict people, or radicalize them, as you allude to. How much of these new efforts on Apple’s part are an attempt to differentiate you guys from other companies who share the tech space with you?

It’s not about differentiation, Zach, it’s about the user. We are very user-focused. The whole company revolves around the user experience. And one of the key elements of the user experience these days is about privacy. And so, if you were to go back and sort of graph the last 10 years [of Apple], or more than that really, you would see a continual level of tools that are being shipped to help people protect their privacy. A few years ago, we shipped intelligent tracking prevention on Safari. This year, we’re talking about nutrition labels, our privacy nutrition labels and ATT. But each year we’re doing something. And so, just like we have a long term roadmap for iPhones and iPads, et cetera, we try to make a contribution on privacy every year as well.

You mentioned the user, and as a user myself, I think the iPhone is a magical device. I live through it daily. But also increasingly, in part because of some of the stuff that you’re referencing — occasionally it’s like holding a Medusa in my hand. There are all these apps that want to addict me with algorithms, or track me, or whatever. I wonder: do you ever feel that way? And do you ever worry that as a result of some of the stuff you’re talking about, people are developing those kinds of ambivalent feelings around your products, even though it’s not your apps that are doing it?

Yeah. I hope people can see sort of what we’re bringing to the table there. When we learned that people were worried about the amount of time they were spending on their devices, we created Screen Time. When parents were worried about their kids, about what they were looking at, we invented or created a lot of parental controls. And so we’re always trying to be either one step ahead, or, if we see something develop in a bad way, we’re trying to quickly figure out what we can do to contribute to the fix of it. We can’t fix everything. But we can do a lot. We can contribute in a meaningful way. And that’s what we’re trying to do.

I know you’re talking to me as the CEO of Apple, but I’m just so curious: Do you ever feel that way about your own phone? Like: Man, there’s a lot going on here, and I have a complicated relationship to it.

Sometimes I feel like I get interrupted too much. And so I’ve used Screen Time in a way for me to sort of strip out a lot of the notifications that I was getting. I asked myself, “Do I really need to know this in the moment, every moment?” And I wound up plummeting the number of notifications that I was getting. It’s a great tool. I don’t know if you’re using it or not, but I would encourage you to use it. Because what I found was that I thought I was getting a lot less than I was getting. And all of a sudden you see what the facts are and you go, “Well, why do I need all of this?”

You’re surely familiar with the critique of some of the stuff that you guys are doing now, which is that the sites and companies that provide a free internet to the rest of us do so basically by selling ads, and that these changes by Apple are going to threaten the business of all kinds of ad-supported sites and publishers. What’s Apple’s response to that?

Well, I think ads are great. And ads have existed a long time, and they’ve existed in times without this sort of invasive targeting. But we’re not trying to get anybody to change their business model. We have no objective to do that. All we’re trying to do is give the individual the right to say, “I want to be tracked,” or “I don’t.” That’s all we’re trying to do. And then with the privacy nutrition label, we’re just trying to give the user more facts so they can make an educated decision of whether they want to download this app or not. It’s not aimed at anyone. It’s about giving the user more power.

A few weeks ago, when you guys suspended Parler from the App Store, you went on Fox News and defended the decision. But the Fox News point to you was basically: “Well, why should you get to decide?” I imagine people will ask a similar question here. Why do you get to decide what does or doesn’t happen to people’s data?

Well, we don’t. I don’t want to decide, to be clear. Because I think that you and I may make a different choice. And what we want to do is supply people the tools so that they can make the decision themselves. That’s opposed to the environment right now, where companies are deciding. A company should not decide about whether they’re going to vacuum up data or not. It should be a conscious decision by you or I, whether our data, and what data, can be vacuumed and how it can be used.

Speaking about making the decision yourself… I live in California, where Apple has partnered with the state to create CA Notify, which is a COVID-19 exposure notification app. And as you were talking, I was thinking that I still haven’t enabled it, precisely because I had some concerns about being tracked. (The app, as Cook is about to point out, does not in fact track you.) How do you guys think about solving problems where one imperative — incredibly important work related to contact tracing — runs, or at least seems to run, into a second imperative, which is privacy?

Well, as it turns out, they’re not mutually exclusive because that is the most privacy-centric implementation that you could imagine. And we’ve sort of designed the architecture in that kind of way. And so it’s only the individual that decides, just like it is with ATT. It’s the individual that decides whether to share data or not. And so I can commit to you that that’s a very privacy-centric app. And I would encourage you to use it. We’ve worked with many, many states on it now, and we’re hoping that it can actually become a national kind of program, instead of a state by state kind of thing.

I’ve seen you say a couple of times that the current state of things regarding data collection and user privacy is unsustainable. I want to make sure I understand what you mean when you say that: What exactly is unsustainable about the current way things are done?

I think it gets back to the user. The current system is not sustainable because the user doesn’t like it. We’re getting a lot of feedback from users that they want the power to make a choice. That it shouldn’t be made for them. And so that’s what we’re supplying, is the tool to do that. I don’t think that the users — and the bulk of the population are users of technology in some way, not just ours, but the sum total of all of the market — I think that you sort of have a user revolt. And the power is with the people. I’ve always believed that. And I still believe that. And so what we’re trying to do is empower them in this way with data.

Obviously a lot of businesses currently run on this super data harvesting-centric model. Anything provided to you for free means that you are effectively the product, and the money comes when access to you and your data are sold to a third party advertiser. What do you think the future is for businesses like that? Do they have a future?

Yeah, I think so. I don’t think people have to change the business model, the business model being that ads support a product. I think that’s very achievable. It’s just that what I think is probably not going to be achievable is doing that without users agreeing to do it, because I think at some point governments are going to step in and regulate that. I mean, just think about it: Would we accept that in any other part of our life? No. It would be unacceptable. And so that’s the sort of the way I look at it. And I’m not sure the exact road. But I’m really confident and really optimistic about where we will head with this. I think we’ll give power back to the user. I think that the ad business can be very robust without these kinds of invasive, 360 degree profiles. And I think some people will say: “You know, I’m okay with giving my 360 degree profile because I like the targeted [ads].” And so people will be in the driver’s seat. Some will do it. Some will not. The ones that will not will have a different kind of ad experience, but still an ad experience that’s robust.

We’re discussing a decision that is located exclusively on Apple devices and the App Store, but is the hope that this will have a domino effect and change behavior more broadly across tech? And if so, walk me into the utopia we might arrive in, if everything happens the way it should happen in terms of people regaining control of their data.

You know, we try to be the ripple in the pond on this. And so it would be great if other people copied this and said, you know, users should have the right, they should have the control. And it would be great if companies would regulate themselves, so to speak, to get there. I’m not sure whether that’s realistic. But I think governments are going to step in and do it if the companies do not. And I’m not sure what the exact path of that happening will be. It’ll be different in each part of the world probably, but I think it will happen because users will demand it. And that utopia is a great utopia because you’ve lined up with all the great things about the web — the empowerment that you get, the knowledge that you gain, the things you can create — without the drain of the things you don’t want to give, like your data.

Source : GQ

Record Chinese Bilateral Surpluses With the United States Are Not Mirrored in the U.S. Trade Data

Brad W. Setser wrote . . . . . . . . .

There are certain rules of thumb that you can usually rely on.

The sun rises in the East. And unless you are on the equator, its angle in the sky will vary with the seasons.

No rules of thumb for trade data are quite as strongly grounded in the physical world.

But historically it has always been the case that the U.S. data for imports from China showed more imports than the Chinese data showed exports to the United States.

Always. Predictable. Expected. You could count on it if you wanted to use the Chinese export data to forecast U.S. imports from China.

China’s data on imports from the United States also tends to show more imports than the U.S. data shows exports to China, but since imports are so much larger than exports, it almost always has been the case that the U.S. data shows a larger bilateral deficit with China than the Chinese data shows a surplus with the United States. (The Hong Kong port effect explains most, but not all, of the discrepancy; see this 2018 blog for my views on how to adjust the U.S.-Chinese balance of payments data.)

This is sort of well-known: the U.S. and Chinese numbers generally do not line up, but historically the discrepancy has run in a predictable direction. *

This rule of thumb now must be tossed out the window.

In the last few months of data, China’s reported exports to the United States have significantly exceeded reported U.S. imports (the exact opposite of the established pattern).

And China’s reported surplus with the United States thus is now larger than the United States’ reported deficit with China (again, the opposite of the norm).

This has only been apparent in the last few months of data—it jumps out in the monthly data and the trailing 3m sum but not in the trailing 12m sum. It was arguably present last year, but the shift in the size of the reported Chinese surplus relative to the U.S. deficit only really started to jump out over the summer.
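Why a recent sign flip shows up in the monthly data and the trailing 3-month sum, but not yet in the trailing 12-month sum, is simple arithmetic: the 12-month window is still dominated by the older months. A minimal sketch, using hypothetical discrepancy numbers (not actual trade data), makes the point:

```python
# Illustrative only: a monthly discrepancy series (U.S.-reported imports
# minus Chinese-reported exports, in billions of USD). Nine months of the
# old pattern (+3bn) followed by three months of the new one (-3bn).
# These are hypothetical numbers chosen to show the windowing effect.
monthly_gap = [3.0] * 9 + [-3.0] * 3

trailing_3m = sum(monthly_gap[-3:])    # only the recent months: sign has flipped
trailing_12m = sum(monthly_gap[-12:])  # still dominated by the nine older months

print(f"trailing 3m:  {trailing_3m:+.1f}bn")
print(f"trailing 12m: {trailing_12m:+.1f}bn")
```

The 3-month sum comes out negative while the 12-month sum stays positive, which is exactly the pattern described above: the flip jumps out in the short window before it registers in the long one.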

Yet there is no doubt there is a gap.

In July 2018, China said it exported $41.6 billion to the United States, and the United States reported importing $47 billion from China. In July 2019, China said it exported $38.9 billion to the United States (down because of the tariffs), and the United States reported importing $41.4 billion from China. And in July 2020, China said it exported $43.7 billion to the United States, while the United States only reported importing $40.7 billion from China.
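The July comparisons above can be tabulated directly. A short sketch using only the figures quoted in the text (billions of USD) shows the sign of the gap flipping between 2019 and 2020:

```python
# Reported July trade flows, in billions of USD, as quoted in the text.
# cn_exports: China's reported exports to the United States
# us_imports: U.S.-reported imports from China
july = {
    2018: {"cn_exports": 41.6, "us_imports": 47.0},
    2019: {"cn_exports": 38.9, "us_imports": 41.4},
    2020: {"cn_exports": 43.7, "us_imports": 40.7},
}

for year, flows in sorted(july.items()):
    gap = flows["us_imports"] - flows["cn_exports"]
    pattern = "historical pattern" if gap > 0 else "pattern reversed"
    print(f"July {year}: U.S. imports minus Chinese exports = {gap:+.1f}bn ({pattern})")
```

The gap runs +5.4bn in 2018 and +2.5bn in 2019 (U.S.-reported imports exceeding Chinese-reported exports, the long-standing pattern), then -3.0bn in 2020, the reversal the post is about.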

As a result, the answer to a lot of politically salient questions—for example, is the bilateral trade deficit with China larger or smaller now than in 2016?—hinges on whether you use the U.S. or the Chinese data. **

If you look at the Chinese data, its current monthly surplus with the United States is at an all-time high for the months of July and August, topping its pre-trade war peaks by substantial margins.

In the U.S. data, the July deficit with China and Hong Kong (adding in Hong Kong reduces the size of the deficit as the United States runs a surplus with HK) is only just above its 2016 levels.

These monthly differences produce a different trajectory in the year-to-date numbers.

If you just look at China’s data, its surplus with the United States looks poised to set an all-time high in 2020, as strong exports over the summer have made up for the obviously weak start of the year.

In the U.S. data, by contrast, the more muted recent deficits have not pushed the deficit toward all-time highs.

As I noted, the gap between China’s reported exports to the United States and reported U.S. imports (plus the larger deficit when reported from the U.S. side than the surplus on the Chinese side) is a long-standing pattern. It reflects Hong Kong’s role in U.S.-China trade—a lot of what China records in its data as an export to Hong Kong historically has ended up in the U.S. data as an import from China, and a lot of what the United States reports as an export to Hong Kong has historically ended up in the Chinese data as an import from the United States.

The signal here comes from the change in the pattern—a long established and well-understood discrepancy between the import and export side data has gone away.

The puzzle now is why the sign on the discrepancy looks to be flipping.

There are a range of possible explanations.

Chinese exporters might be overstating their exports, in general and to the United States. Overstating exports is a classic way of getting capital into a country with capital controls.

But the simplest and most parsimonious explanation is that the U.S. tariffs have created a strong incentive for firms importing into the United States to go to some lengths to understate their imports from China. Thus, I would bet that U.S. imports from China are now slightly under-counted (which by implication holds the bilateral trade deficit down).

No doubt more investigation will yield evidence that points toward a conclusive answer.

Fact-checking can seem like a dull exercise. Mapping one country’s import data to a partner’s export data even more so, especially in a world where looking at bilateral trade balances is viewed as a bit retro, global value chains and all. But sometimes it yields interesting results. A similar exercise back in 2015—when the Chinese current account surplus stopped tracking the goods balance—led me to look at whether the reported increase in tourism imports in the Chinese data was matched by a rise in the number of actual tourists (it wasn’t) and ultimately produced quite a good Fed paper. My guess is that the changing sign of the U.S. imports vs. Chinese exports discrepancy will generate another good paper.

Source : Council on Foreign Relations

U.S. Economic Hard Data vs Soft Data

Hard data are painting a different picture of the current economy