South American military dictatorships combined forces in the late 1970s on a continent-wide crackdown they called Operation Condor against perceived threats to their rule. It was part of a broader wave of violence in which nuns and priests were imprisoned, dissidents were tossed out of airplanes and thousands of victims were “disappeared.”
To coordinate this brutal campaign, Argentina, Chile and other countries established a secret communications network using encryption machines from a Swiss company called Crypto AG.
Crypto was secretly owned by the CIA as part of a decades-long operation with West German intelligence. The U.S. spy agency was, in effect, supplying rigged communications gear to some of South America’s most brutal regimes and, as a result, was in a unique position to know the extent of their atrocities.
Whether there were opportunities to act, and failures to do so, are among the difficult questions raised by the revelations about the CIA’s involvement in Crypto — dubbed Operation Rubicon by the agency. The program enabled U.S. spy agencies to monitor the communications of dozens of countries in Europe, Asia, Africa and Latin America over half a century.
Two brief thoughts: one, why didn’t the US government do more to rein in these abuses? Because they were being conducted by right-wing governments?
and two, no wonder China wants to emulate this program with the installation of Huawei networking gear, and also no wonder the US is opposed.
No NATO ally should succumb to the temptation of letting Chinese tech giant Huawei into their next-generation cellular networks, U.S. House Speaker Nancy Pelosi said Monday at Allied headquarters, turning U.S. opposition to Huawei into a bipartisan effort.
Pelosi said the invasion of privacy that would result from having Huawei integrated into Europe’s 5G communication networks would be “like having the state police, the Chinese state police, right in your pocket.”
She insisted such technology was far too sensitive to turn over to Chinese interests, even though the company can deliver it more cheaply, thanks to its reliance on Western know-how to build its systems.
“While some people say that it’s cheaper to do Huawei — well yeah — it’s a People’s Liberation Army initiative using reversed engineering from Western technology,” Pelosi, the senior Democratic lawmaker, told reporters in Brussels.
“So, of course it’s going to be cheaper to put on the market. And if it’s cheaper, then they get the market share and then they (China) bring in their autocracy of lack of privacy.”
So Tim Cook called for better privacy regulation in the US. Maybe he reads this humble blog.1
In 2019, it’s time to stand up for the right to privacy—yours, mine, all of ours. Consumers shouldn’t have to tolerate another year of companies irresponsibly amassing huge user profiles, data breaches that seem out of control and the vanishing ability to control our own digital lives.
This problem is solvable—it isn’t too big, too challenging or too late. Innovation, breakthrough ideas and great features can go hand in hand with user privacy—and they must. Realizing technology’s potential depends on it.
That’s why I and others are calling on the U.S. Congress to pass comprehensive federal privacy legislation—a landmark package of reforms that protect and empower the consumer. Last year, before a global body of privacy regulators, I laid out four principles that I believe should guide legislation:
“Acxiom, like Mr. Cook, also supports a national privacy law for the U.S., such as GDPR provides for the European Union. Acxiom is actively participating in discussions with U.S. lawmakers as well as industry trade groups to help ensure U.S. consumers receive the kind of transparency, access, and control Acxiom has been providing voluntarily for years,” the company said. “We believe it would be universally beneficial if we were able to work with Apple and other industry leaders to define the best set of laws that maintain the benefits of data in our economy while giving the necessary protections and rights to all people.”
In its statement, Acxiom said it is working with lawmakers to build a “singular, united set of policies across the U.S.” What it does not want, according to the statement, are “multiple and independent state laws” making it onerous to comply.
Of course, it behooves Acxiom to seem amenable to such legislative moves. It’s becoming increasingly clear that the tide is shifting in the U.S., and more people want better safeguards over their data. Cook called for not just stricter data regulations, but a federally controlled data broker database that would make it possible for citizens to know exactly what information the companies have on them and which companies transacted with these data firms. While Acxiom is saying it’s open to new regulation, it’s unclear what exactly the firm will agree to.
For the past year, select Google advertisers have had access to a potent new tool to track whether the ads they ran online led to a sale at a physical store in the U.S. That insight came thanks in part to a stockpile of Mastercard transactions that Google paid for.
But most of the two billion Mastercard holders aren’t aware of this behind-the-scenes tracking. That’s because the companies never told the public about the arrangement.
Alphabet Inc.’s Google and Mastercard Inc. brokered a business partnership during about four years of negotiations, according to four people with knowledge of the deal, three of whom worked on it directly. The alliance gave Google an unprecedented asset for measuring retail spending, part of the search giant’s strategy to fortify its primary business against onslaughts from Amazon.com Inc. and others.
Nick Heer writes about a topic near and dear to our brains, albeit from the web developer side: why do websites load so slowly? And why is our personal data being sold without our informed consent?
The average internet connection in the United States is about six times as fast as it was just ten years ago, but instead of making it faster to browse the same types of websites, we’re simply occupying that extra bandwidth with more stuff. Some of this stuff is amazing: in 2006, Apple added movies to the iTunes Store that were 640 × 480 pixels, but you can now stream movies in HD resolution and (pretend) 4K. These much higher speeds also allow us to see more detailed photos, and that’s very nice.
But a lot of the stuff we’re seeing is a pile-up of garbage on seemingly every major website that does nothing to make visitors happier — if anything, much of this stuff is deeply irritating and morally indefensible.
Take that CNN article, for example. Here’s what it contained when I loaded it:
Eleven web fonts, totalling 414 KB
Four stylesheets, totalling 315 KB
Twenty-nine XML HTTP requests, totalling about 500 KB
Approximately one hundred scripts, totalling several megabytes — though it’s hard to pin down the number and actual size because some of the scripts are “beacons” that load after the page is technically finished downloading.
The vast majority of these resources are not directly related to the information on the page, and I’m including advertising. Many of the scripts that were loaded are purely for surveillance purposes: self-hosted analytics, of which there are several examples; various third-party analytics firms like Salesforce, Chartbeat, and Optimizely; and social network sharing widgets. They churn through CPU cycles and cause my six-year-old computer to cry out in pain and fury. I’m not asking much of it; I have opened a text-based document on the web.
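Anyone curious can repeat a rough version of Heer’s audit on any page from the browser console. The sketch below (my own illustration, not Heer’s code) groups the Resource Timing API’s entries by initiator type and totals their reported transfer sizes:

```javascript
// Group Resource Timing entries by initiator type and total their
// transfer sizes in bytes. A sketch, not a full audit: transferSize
// reads as 0 for cached resources and for cross-origin resources
// served without a Timing-Allow-Origin header.
function tallyResources(entries) {
  const totals = {};
  for (const entry of entries) {
    const type = entry.initiatorType || "other";
    totals[type] = (totals[type] || 0) + (entry.transferSize || 0);
  }
  return totals;
}

// In a browser console, on a loaded page:
//   tallyResources(performance.getEntriesByType("resource"))
// yields per-type byte totals, e.g. how much went to "script",
// "css", "xmlhttprequest", and so on.
```

Even this crude tally makes the pattern Heer describes visible: on a typical news article, the "script" bucket dwarfs everything that actually renders the text you came to read.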
An actual solution recognizes that this bullshit is inexcusable. It is making the web a cumulatively awful place to be. Behind closed doors, those in the advertising and marketing industry can be pretty lucid about how much they also hate surveillance scripts and how awful they find these methods, while simultaneously encouraging their use. Meanwhile, users are increasingly taking matters into their own hands — the use of ad blockers is rising across the board, many of which also block tracking scripts and other disrespectful behaviours. Users are making that choice.
They shouldn’t have to. Better choices should be made by web developers to not ship this bullshit in the first place. We wouldn’t tolerate such intrusive behaviour more generally; why are we expected to find it acceptable on the web?
An honest web is one in which the overwhelming majority of the code and assets downloaded to a user’s computer are used in a page’s visual presentation, with nearly all the remainder used to define the semantic structure and associated metadata on the page. Bullshit — in the form of CPU-sucking surveillance, unnecessarily-interruptive elements, and behaviours that nobody responsible for a website would themselves find appealing as a visitor — is unwelcome and intolerable.
All that “surveillance” stuff and its related files are an abomination, and please no-one. I’ve heard anecdotal reports that even marketing-savvy companies don’t frequently use all the data that is collected on their behalf. So who wants it? Unclear to me. I guess the third-party data collection industry is happy to vacuum up this data because it can subsequently re-sell our information to the highest bidder, but that’s not a good enough reason to continue making web pages cumbersome.
Surveillance Society – Halsted and Division Edition
The Guardian reports:
Facebook used its apps to gather information about users and their friends, including some who had not signed up to the social network, reading their text messages, tracking their locations and accessing photos on their phones, a court case in California alleges.
The claims of what would amount to mass surveillance are part of a lawsuit brought against the company by the former startup Six4Three, listed in legal documents filed at the superior court in San Mateo as part of a court case that has been ongoing for more than two years.
A Facebook spokesperson said that Six4Three’s “claims have no merit, and we will continue to defend ourselves vigorously”.
The allegations about surveillance appear in a January filing, the fifth amended complaint made by Six4Three. It alleges that Facebook used a range of methods, some adapted to the different phones that users carried, to collect information it could use for commercial purposes.
“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls,” one court document says.
This is Facebook’s business model though, so what exactly are they going to argue? No, we don’t collect data on our users and then use this information to sell advertising to corporations?
The most disturbing detail1 is that Facebook did this for people who weren’t Facebook users. How did these people consent? How do they request their data? How do they update their privacy settings?
Cell phones are useful for a lot of things, but owning one does have consequences, like the ability of third-party organizations or government entities to track your location down to 25-50 feet any time your phone is connected to a cell tower.
The NYT reports:
Senator Ron Wyden, Democrat of Oregon, wrote in a letter this week to the Federal Communications Commission that Securus confirmed that it did not “conduct any review of surveillance requests.” The senator said relying on customers to provide documentation was inadequate. “Wireless carriers have an obligation to take affirmative steps to verify law enforcement requests,” he wrote, adding that Securus did not follow those procedures.
The service provided by Securus reveals a potential weakness in a system that is supposed to protect the private information of millions of cellphone users. With customers’ consent, carriers sell the ability to acquire location data for marketing purposes like providing coupons when someone is near a business, or services like roadside assistance or bank fraud protection. Companies that use the data generally sign contracts pledging to get people’s approval — through a response to a text message, for example, or the push of a button on a menu — or to otherwise use the data legally.
But the contracts between the companies, including Securus, are “the legal equivalent of a pinky promise,” Mr. Wyden wrote. The F.C.C. said it was reviewing the letter.
Courts are split on whether investigators need a warrant based on probable cause to acquire location data. In some states, a warrant is required for any sort of cellphone tracking. In other states, it is needed only if an investigator wants the data in real time. And in others no warrant is needed at all.
Other experts said the law should apply for any communications on a network, not just phone calls. “If the phone companies are giving someone a direct portal into the real-time location data on all of their customers, they should be policing it,” said Laura Moy, the deputy director of the Georgetown Law Center on Privacy & Technology.
Mr. Wyden, in his letter to the F.C.C., also said that carriers had an obligation to verify whether law enforcement requests were legal. But Securus cuts the carriers out of the review process, because the carriers do not receive the legal documents.
The letter called for an F.C.C. investigation into Securus, as well as the phone companies and their protections of user data. Mr. Wyden also sent letters to the major carriers, seeking audits of their relationships with companies that buy consumer data. Representatives for AT&T, Sprint, T-Mobile and Verizon said the companies had received the letters and were investigating.
In this particular instance, the third parties selling your location data are called 3Cinteractive and LocationSmart, but there are hundreds more such companies that have built their businesses on turning your location into sellable data, most of them relatively obscure.
Securus received the data from a mobile marketing company called 3Cinteractive, according to 2013 documents from the Florida Department of Corrections. Securus said that for confidentiality reasons it could not confirm whether that deal was still in place, but a spokesman for Mr. Wyden said the company told the senator’s office it was. In turn, 3Cinteractive got its data from LocationSmart, a firm known as a location aggregator, according to documents from those companies. LocationSmart buys access to the data from all the major American carriers, it says.
How does it work?
“Envision a cell site,” says Allen (a typical tower appears in the photo above). “They’re triangular, and each side has about 120 degrees of sweep.” Every time a signal is transmitted to a nearby phone, says Allen, there is a round-trip delay to the mobile device and back. By using all three sides of the triangle to “talk” to the mobile device, the tower can triangulate which edge of the base station is closest to the device. “Typically the accuracy return varies,” says Allen. “In urban settings, it can be accurate down to several blocks; in suburban settings, several hundred meters.”
“We can locate any subscriber,” says Allen, “and companies want all those subscribers to be addressable,” or discoverable. Normally, this requires passing through some privacy gateways, says Allen. “The end user must opt in through a Web portal or SMS, or an app like Foursquare,” he says, per “universal” CTIA and MMA guidelines, and carriers’ own privacy protocol.
But with enterprise services, there’s a catch. “In a workplace scenario, the corporate entity has the right to opt-in those devices,” says Allen. “The [employee] is typically notified, but the opt-in is up to the employer.”
In other words: if your employer owns your phone, tablet or 3G-enabled computer, they’re entitled to own your location, too.
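The ranging Allen describes can be sketched numerically (my own illustration, not anything from the article): the distance is half the round-trip delay at the speed of light, and the 120-degree face of the tower that answers gives a coarse bearing.

```javascript
const C = 299792458; // speed of light in m/s (radio propagation)

// Distance from tower to handset, estimated from the signal's
// round-trip time (out and back, hence the divide-by-two).
function estimateDistance(rttSeconds) {
  return (C * rttSeconds) / 2; // meters
}

// Which of the tower's three 120-degree faces covers a given
// bearing, returned as sector 0, 1, or 2.
function sectorForBearing(bearingDeg) {
  const normalized = ((bearingDeg % 360) + 360) % 360;
  return Math.floor(normalized / 120);
}

// A 4-microsecond round trip puts the handset roughly 600 m out,
// consistent with the "several hundred meters" suburban accuracy
// Allen quotes.
console.log(Math.round(estimateDistance(4e-6))); // 600
```

Real-world accuracy is far worse than this idealized math suggests: timing resolution, multipath reflections, and processing delays all smear the estimate, which is why Allen quotes accuracy in blocks or hundreds of meters rather than meters.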
Even Apple, a corporation that prides itself on not selling user data the way its competitors do, has acknowledged that user data has sometimes been sold.
9to5Mac reports:
Over the last few days, Apple has seemingly started cracking down on applications that share location data with third-parties. In such cases, Apple has been removing the application in question and informing developers that their app violates two parts of the App Store Review Guidelines…
Thus far, we’ve seen several cases of Apple cracking down on these types of applications. The company informs developers via email that “upon re-evaluation,” their application is in violation of sections 5.1.1 and 5.1.2 of the App Store Review Guidelines, which pertain to transmitting user location data and user awareness of data collection.
Legal – 5.1.1 and Legal 5.1.2
The app transmits user location data to third parties without explicit consent from the user and for unapproved purposes.
Apple explains that developers must remove any code, frameworks, or SDKs that relate to the violation before their app can be resubmitted to the App Store.
Ashley Parker, Carol D. Leonnig, Josh Dawsey and Tom Hamburger of the Washington Post report:
President Trump’s personal attorney Michael D. Cohen sometimes taped conversations with associates, according to three people familiar with his practice, and allies of the president are worried that the recordings were seized by federal investigators in a raid of Cohen’s office and residences this week.
Cohen, who served for a decade as a lawyer at the Trump Organization and is a close confidant of Trump, was known to store the conversations using digital files and then replay them for colleagues, according to people who have interacted with him.
“We heard he had some proclivity to make tapes,” said one Trump adviser, who spoke on the condition of anonymity because of the ongoing investigation. “Now we are wondering, who did he tape? Did he store those someplace where they were actually seized? . . . Did they find his recordings?”
Especially funny is that Michael Cohen2 made tapes because “Spanky” Trump so often bragged about taping conversations, even though Trump never actually took the time to set up a system to record them.
You Wanted To Disappear
Tim O’Brien, a Trump biographer and executive editor of Bloomberg View, wrote a column in the wake of Trump’s taping claim saying that Comey likely had little reason to worry. In the piece, O’Brien recounted that Trump frequently made a similar boast to him.
“Back in the early 2000s, Trump used to tell me all the time that he was recording me when I covered him as reporter for the New York Times,” O’Brien wrote. “He also said the same thing when I was writing a biography of him, ‘Trump Nation.’ I never thought he was, but who could be sure?”
But after Trump sued him for libel shortly after his biography came out, O’Brien’s lawyers deposed Trump in December 2007 — during which Trump admitted he had not, in fact, clandestinely taped O’Brien.
“I’m not equipped to tape-record,” Trump said in the deposition. “I may have said it once or twice to him just to — on the telephone, because everything I said to him he’d write incorrectly; so just to try and keep it honest.”
I’d say the odds are greater than 50/50 that Trump was recorded by Cohen saying something of interest to federal prosecutors, and that the Feds have a copy of this recording or recordings, and that Trump is stress-peeing on a rug in the Oval Office right now.
said everyone at the same time, except for Trump and his thugs [↩]
The big news over the weekend was how Facebook, Trump and Cambridge Analytica worked together to weaponize people’s personal information against them to help Trump win the 2016 election, perhaps with the assistance of Russia. The truth is this harvesting and manipulation of data is Facebook’s business model, and anyone who uses Facebook is participating. Facebook is “free”; how exactly do you think they make their billions?
American and British lawmakers demanded on Sunday that Facebook explain how a political data firm with links to President Trump’s 2016 campaign was able to harvest private information from more than 50 million Facebook profiles without the social network’s alerting users. The backlash forced Facebook to once again defend the way it protects user data.
Senator Amy Klobuchar of Minnesota, a Democratic member of the Senate Judiciary Committee, went so far as to press for Mark Zuckerberg, Facebook’s chief executive, to appear before the panel to explain what the social network knew about the misuse of its data “to target political advertising and manipulate voters.”
The calls for greater scrutiny followed reports on Saturday in The New York Times and The Observer of London that Cambridge Analytica, a political data firm founded by Stephen K. Bannon and Robert Mercer, the wealthy Republican donor, had used the Facebook data to develop methods that it claimed could identify the personalities of individual American voters and influence their behavior. The firm’s so-called psychographic modeling underpinned its work for the Trump campaign in 2016, though many have questioned the effectiveness of its techniques.
But Facebook did not inform users whose data had been harvested. The lack of disclosure could violate laws in Britain and in many American states.
Dr Kogan – who later changed his name to Dr Spectre, but has subsequently changed it back to Dr Kogan – is still a faculty member at Cambridge University, a senior research associate. But what his fellow academics didn’t know until Kogan revealed it in emails to the Observer (although Cambridge University says that Kogan told the head of the psychology department), is that he is also an associate professor at St Petersburg University. Further research revealed that he’s received grants from the Russian government to research “Stress, health and psychological wellbeing in social networks”. The opportunity came about on a trip to the city to visit friends and family, he said.
There are other dramatic documents in Wylie’s stash, including a pitch made by Cambridge Analytica to Lukoil, Russia’s second biggest oil producer. In an email dated 17 July 2014, about the US presidential primaries, Nix wrote to Wylie: “We have been asked to write a memo to Lukoil (the Russian oil and gas company) to explain to them how our services are going to apply to the petroleum business.” Nix said that “they understand behavioural microtargeting in the context of elections” but that they were “failing to make the connection between voters and their consumers”. The work, he said, would be “shared with the CEO of the business”, a former Soviet oil minister and associate of Putin, Vagit Alekperov.
“It didn’t make any sense to me,” says Wylie. “I didn’t understand either the email or the pitch presentation we did. Why would a Russian oil company want to target information on American voters?”
Lukoil is a private company, but its CEO, Alekperov, answers to Putin, and it’s been used as a vehicle of Russian influence in Europe and elsewhere – including in the Czech Republic, where in 2016 it was revealed that an adviser to the strongly pro-Russian Czech president was being paid by the company.
When I asked Bill Browder – an Anglo-American businessman who is leading a global campaign for a Magnitsky Act to enforce sanctions against Russian individuals – what he made of it, he said: “Everyone in Russia is subordinate to Putin. One should be highly suspicious of any Russian company pitching anything outside its normal business activities.”
The attention led to Facebook suspending Mr. Wylie’s Facebook and Instagram accounts…
In the latest turn of the developing scandal around how Facebook’s user data wound up in the hands of Cambridge Analytica — for use in the development of psychographic profiles that may or may not have played a part in the election victory of Donald Trump — the company has taken the unusual step of suspending the account of the whistleblower who helped expose the issues.
Academic researchers began publishing warnings that third-party Facebook apps represented a major possible source of privacy leakage in the early 2010s. Some noted that the privacy risks inherent in sharing data with apps were not at all clear to users. One group termed our new reality “interdependent privacy,” because your Facebook friends, in part, determine your own level of privacy.
For as long as apps have existed, they have asked for a lot of data and people have been prone to give it to them. Back in 2010, Penn State researchers systematically recorded what data the top 1,800 apps on Facebook were asking for. They presented their results in 2011 with the paper “Third-Party Apps on Facebook: Privacy and the Illusion of Control.” The table below shows that 148 apps were asking for permission to access friends’ information.
But The Guardian’s reporting suggests that the company’s efforts to restuff Pandora’s box have been lax. Wylie, the whistleblower, received a letter from Facebook asking him to delete any Facebook data nearly two years after the existence of the data was first reported. “That to me was the most astonishing thing,” Wylie told The Guardian. “They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”
But even if Facebook were maximally aggressive about policing this kind of situation, what’s done is done. It’s not just that the data escaped, but that Cambridge Analytica almost certainly learned everything they could from it. As stated in The Guardian, the contract between GSR and Strategic Communications Laboratories states, specifically, “The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information.”
It’s important to dwell on this. It’s not that this research was supposed to identify every U.S. voter just from this data, but rather to develop a method for sorting people based on Facebook’s profiles. Wylie believes that the data was crucial in building Cambridge Analytica’s models. It certainly seems possible that once the “training set” had been used to learn how to psychologically profile people, this specific data itself was no longer necessary. But the truth is that no one knows if the Kogan data had much use out in the real world of political campaigning. Psychological profiling sounds nefarious, but the way that Kogan and Cambridge Analytica first attempted to do it may well have proven, as the company maintains, “fruitless.”
The way I personally deal with Facebook is by seeding it with incorrect information whenever I can, and by being diligent about deleting Facebook cookies from my browsers. Of course, I’m sure they know way too much about me, but at least some of their information is wrong.
Kinda seems like a big deal, though this sort of information is in all sorts of databases, public and private…
A computer-security company said that a proprietary data set containing personal information on nearly 200 million American voters and their predicted voting behavior was left unprotected online, in a large cache of spreadsheets and other electronic files.
According to security company UpGuard, the information, which was available on a public server accessible by anyone via the internet, was compiled by consulting firm Deep Root Analytics, which helps Republican campaigns choose which voters to target with TV advertising.
The voter records, which are public information, were augmented with proprietary analysis about voter behavior by Deep Root, which tries to predict voters’ policy preferences and how likely they are to choose a particular candidate.
The voter information, portions of which were reviewed by The Wall Street Journal, includes the names and other personally identifying information about 198 million registered voters, which would appear to be nearly all of the estimated registered voters in the U.S., the company found. The information includes dates of birth, mailing addresses and party affiliation, as well as self-reported racial demographics, according to Mr. Vickery, but didn’t include social security numbers or financial information.
Registration information about individual voters is available from state and county election boards to anyone who requests it, though compiling it all in one place would take a significant amount of time and labor, and it wouldn’t contain any predictions about voter behavior.
Hmm, good news, though I expect Governor Rauner to veto it, for reasons…
The state Senate on Thursday approved the groundbreaking Right to Know Act, a measure that would require online companies such as Google, Facebook and Amazon to disclose to consumers what data about them has been collected and shared with third parties.
The bill, sponsored by Sen. Michael Hastings, D-Tinley Park, now heads to the Illinois House after passing on a 31-21 vote.
“I think this is a step forward for Illinois in terms of data privacy,” Hastings said Friday. “It gives people the right to know what information (internet companies are) selling to a third party.”
Illinois is taking center stage in the national debate over internet privacy legislation, which is shifting from the federal to state level. Congress voted in March to undo the Federal Communications Commission’s broadband privacy rules, which were adopted last fall under the Obama administration and set to go into effect this year.
President Donald Trump on April 3 signed the measure that repealed the broadband privacy rules.
The FCC protections would have required internet service providers, such as Comcast, Verizon and AT&T, to disclose what personal information they collect and share and would have required consent from consumers before sharing more sensitive information.
Privacy advocates believe Illinois and other states must step up to fill the void left by the shift in federal policy.
The Right to Know Act would require the operator of a commercial website or online service to make available “certain specified information” that has been disclosed to a third party and to provide an email address or toll-free telephone number for customers to request that information.
Major internet companies have been pushing back against the Illinois initiative, ramping up lobbying efforts as the privacy legislation advanced through the Senate, Hastings said. Online trade associations, including CompTIA, the Internet Association and NetChoice, also met with Hastings to voice opposition to the measure.
Of course the technology companies that have been profiting handsomely by selling our information are opposed to this bill, but that doesn’t mean it isn’t a good idea for consumers. I want, at minimum, to be able to share in the profits, and, even better, a way to opt out entirely. Ha. Just for grins, read the text of the IL Senate bill to see what kinds of information are being sold:
(a) Real name, alias, nickname, and user name.
(b) Address information, including, but not limited to, postal or e-mail.
(c) Telephone number.
(d) Account name.
(e) Social security number or other government-issued identification number, including, but not limited to, social security number, driver’s license number, identification card number, and passport number.
(f) Birthdate or age.
(g) Physical characteristic information, including, but not limited to, height and weight.
(h) Sexual information, including, but not limited to, sexual orientation, sex, gender status, gender identity, and gender expression.
(i) Race or ethnicity.
(j) Religious affiliation or activity.
(k) Political affiliation or activity.
(l) Professional or employment-related information.
(m) Educational information.
(n) Medical information, including, but not limited to, medical conditions or drugs, therapies, mental health, or medical products or equipment used.
(o) Financial information, including, but not limited to, credit, debit, or account numbers, account balances, payment history, or information related to assets, liabilities, or general creditworthiness.
(p) Commercial information, including, but not limited to, records of property, products or services provided, obtained, or considered, or other purchasing or consumer histories or tendencies.
(q) Location information.
(r) Internet or mobile activity information, including, but not limited to, Internet protocol addresses or information concerning the access or use of any Internet or mobile-based site or service.
(s) Content, including text, photographs, audio or video recordings, or other material generated by or provided by the customer.
Are you ok with Acxiom, Experian and other similar corporations collecting, collating, selling and re-selling this information about you? I’m not.
Samsung has confirmed that its “smart TV” sets are listening to customers’ every word, and the company is warning customers not to speak about personal information while near the TV sets.
The company revealed that the voice activation feature on its smart TVs will capture all nearby conversations. The TV sets can share the information, including sensitive data, with Samsung as well as third-party services.
Samsung has updated its policy and named the third party in question, Nuance Communications, Inc.
McSherry called that bit of qualifying language “worrisome.”
“Samsung may just be giving itself some wiggle room as the service evolves, but that language could be interpreted pretty broadly,” she said.
Hmm, sounds familiar. Remember this from a few weeks ago:
Consumers have bought more than 11 million internet-connected Vizio televisions since 2010. But according to a complaint filed by the FTC and the New Jersey Attorney General, consumers didn’t know that while they were watching their TVs, Vizio was watching them. The lawsuit challenges the company’s tracking practices and offers insights into how established consumer protection principles apply to smart technology.
Starting in 2014, Vizio made TVs that automatically tracked what consumers were watching and transmitted that data back to its servers. Vizio even retrofitted older models by installing its tracking software remotely. All of this, the FTC and AG allege, was done without clearly telling consumers or getting their consent.
What did Vizio know about what was going on in the privacy of consumers’ homes? On a second-by-second basis, Vizio collected a selection of pixels on the screen that it matched to a database of TV, movie, and commercial content. What’s more, Vizio identified viewing data from cable or broadband service providers, set-top boxes, streaming devices, DVD players, and over-the-air broadcasts. Add it all up and Vizio captured as many as 100 billion data points each day from millions of TVs.
Vizio then turned that mountain of data into cash by selling consumers’ viewing histories to advertisers and others. And let’s be clear: We’re not talking about summary information about national viewing trends. According to the complaint, Vizio got personal. The company provided consumers’ IP addresses to data aggregators, who then matched the address with an individual consumer or household. Vizio’s contracts with third parties prohibited the re-identification of consumers and households by name, but allowed a host of other personal details – for example, sex, age, income, marital status, household size, education, and home ownership. And Vizio permitted these companies to track and target its consumers across devices.
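The pixel-sampling scheme the FTC describes is a form of automated content recognition: sample a fixed selection of pixels from each frame, reduce them to a fingerprint, and look that fingerprint up in a database of known TV, movie, and commercial content. A toy sketch of the matching step, purely illustrative (Vizio’s actual system is not public, and all names here are made up):

```python
# Toy sketch of pixel-fingerprint content matching (illustrative only).
def fingerprint(frame, sample_points):
    """Hash a fixed selection of pixel values into a lookup key.

    `frame` is a 2-D grid of pixel values; `sample_points` is a list of
    (x, y) coordinates to sample, the same coordinates used when the
    reference database was built.
    """
    return hash(tuple(frame[y][x] for (x, y) in sample_points))

def identify(frame, sample_points, known_content):
    """Return the title whose stored fingerprint matches, or None."""
    return known_content.get(fingerprint(frame, sample_points))
```

Run once per second (or per frame) against a server-side database, this is enough to reconstruct a viewing history without ever “seeing” the whole screen, which is roughly what the complaint alleges.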
Plus, the whole listening-to-your-every-word thing might not always be in your own best interests:
Upon further investigation, however, police began suspecting foul play: Broken knobs and bottles, as well as blood spots around the tub, suggested there had been a struggle. A few days later, the Arkansas chief medical examiner ruled Collins’s death a homicide — and police obtained a search warrant for Bates’s home.
Inside, detectives discovered a bevy of “smart home” devices, including a Nest thermostat, a Honeywell alarm system, a wireless weather monitoring system and an Amazon Echo. Police seized the Echo and served a warrant to Amazon, noting in the affidavit there was “reason to believe that Amazon.com is in possession of records related to a homicide investigation being conducted by the Bentonville Police Department.”
That warrant threw a wrinkle into what might have been a traditional murder investigation, as first reported by the Information, a news site that covers the technology industry.
While police have long seized computers, cellphones and other electronics to investigate crimes, this case has raised fresh questions about privacy issues regarding devices like the Amazon Echo or the Google Home, voice-activated personal command centers that are constantly “listening.” Namely, is there a difference in the reasonable expectation of privacy one should have when dealing with a device that is “always on” in one’s own home?
The Echo is equipped with seven microphones and responds to a “wake word,” most commonly “Alexa.” When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website.
A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later. For instance, if you asked your Echo, “Alexa, what is the weather right now?” you could later go back to the app to find out exactly what time that question was asked.
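Capturing “a fraction of a second of audio before the wake word” implies the device continuously buffers a short rolling window of audio even while idle. A minimal sketch of that idea using a bounded deque as the pre-roll buffer (the class name, frame sizes, and API are my own illustration, not Amazon’s implementation):

```python
from collections import deque

class WakeWordRecorder:
    """Illustrative sketch: keep a rolling pre-roll buffer so that when
    a wake word is detected, the captured clip includes audio from just
    before the detection (assumed design, not Amazon's actual code)."""

    def __init__(self, preroll_frames=25):  # e.g. ~0.5 s at 20 ms/frame
        self.preroll = deque(maxlen=preroll_frames)  # rolling window
        self.recording = False
        self.captured = []

    def on_frame(self, frame, wake_word_detected=False):
        if self.recording:
            self.captured.append(frame)  # stream everything after wake
        elif wake_word_detected:
            # Start the clip with the buffered pre-roll audio.
            self.captured = list(self.preroll) + [frame]
            self.recording = True
        else:
            self.preroll.append(frame)  # oldest frames drop off the deque
```

The `maxlen` on the deque is what keeps the idle-time buffer tiny and transient; only once the wake word fires does anything persist and get shipped upstream.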
The Biometric Information Privacy Act of Illinois is not a law many are familiar with. But if you have ever shared a photo on social media, the little-known statute turns out to be one of the nation’s toughest regulations for how companies like Facebook and Google can use facial recognition technologies to identify you online.
On Thursday, an Illinois state senator, Terry Link, introduced an amendment that would have weakened the law by exempting photo-tagging technologies that are now commonly used on social media. The proposal also had the potential to extinguish several class-action lawsuits against technology companies like Facebook by retroactively removing the right of Illinois citizens to sue companies that might have broken the law in the past.
The amendment was lobbied for by Facebook, according to a person involved in the effort who spoke on the condition of anonymity. And it helps to illustrate how, on issues from drone aircraft to genetic information to statutes that govern how companies sell consumer information to data miners, tech companies are in a capital-to-capital fight to keep new laws from being passed or to soften those already on the books.
“The Illinois biometric privacy act is one of the best new privacy laws in the country,” said Marc Rotenberg, president of the Electronic Privacy Information Center. “It’s bad news for consumers when Internet companies start lobbying against good privacy laws.”
Applicants will have their photograph taken at a local office and a digital copy will be submitted immediately to Springfield for comparison in a pool of several million digital photos, according to Jim Burns, inspector general for the secretary of state’s office.
“We have in Illinois one of the better facial recognition systems in the country,” he said.
Illinois is among 27 states either not in compliance or taking steps to comply with the Real ID Act. Under this act, stricter identification is required to pass through airport security and enter federal buildings. Homeland Security earlier this year postponed the deadline for states to comply to 2018.
Homeland Security also will accept the temporary paper document in conjunction with an old driver’s license or ID card to board an aircraft until the permanent card arrives in the mail.
Congress passed the law in 2005 after a 9/11 Commission recommendation to take steps that would make it tougher to counterfeit government-issued IDs.
Critics of Real ID, such as the American Civil Liberties Union, have complained that it is a blatant invasion of privacy and would make people vulnerable to identity theft.
Ed Yohnka, director of communications at American Civil Liberties Union of Illinois, said he believes Illinois and other states have been doing a good job protecting peoples’ identities, and switching to a national identification card would do more harm than good.
“Congress ought to pull the plug on this,” he said. “It creates a national identification system that puts people at a greater risk of having their identity stolen.
“They talk about this in terms of it being for safety and security, but there is no evidence that it adds any of those things,” Yohnka said. “But what we do know is that it creates this powerful dynamic that can be used for surveillance.
“Once you have this national database, the only natural thing to do next is to take it and begin to use it to track people,” Yohnka said. “Then you are just creating a huge surveillance system, and that’s the real danger.”
Yohnka said if Real ID is developed, the government would have the potential to track what people buy and where they go.
Even with the new procedures, IL is still only 84% in compliance, whatever that really means. And by the way, for a state already in budgetary trouble, here’s an extra expense:
The system will cost the state an additional $8.3 million in vendor and postage costs a year, said Nathan Maddox, [Illinois Secretary of State Jesse] White’s senior legal adviser. The state plans to use a fund dedicated to driver’s license upgrades to pay for the new system.
“We have been making steady progress in implementing Real ID,” Maddox said. “We’ve met approximately 84 percent of the requirements.”
Illinois Secretary of State Jesse White announced that his office is upgrading security features to the Driver’s License/ID card design and expanding the central issuance process for driver’s licenses and ID cards to all applicants. With implementation of these changes, Illinois has moved closer to achieving full REAL ID compliance, which is a federal mandate of the U.S. Department of Homeland Security (DHS). By the end of July, applicants visiting Driver Services facilities will no longer be issued a new permanent DL/ID card at the end of the application process. Instead, they will leave the facility with a temporary, secure paper driver’s license, which is valid for 45 days and will serve as their DL/ID for driving purposes and proof of identification. The temporary, secure paper driver’s license or ID card will contain a photo and the basic information that appears on the permanent driver’s license or ID card. In addition, the facility employee will return the old DL/ID card back to the applicant after punching a hole in it.
Meanwhile, the applicant’s information will be sent to a centralized, secure facility in Illinois. After fraud checks have been conducted to ensure the applicant’s identity, a higher quality, more secure DL/ID will be printed and sent via U.S. mail within 15 business days to the applicant’s address.
For purposes of air travel, DHS states that it will accept the temporary document in conjunction with the old DL/ID to board an aircraft until the permanent card arrives in the mail. Illinois joins 39 other states that have moved to centralized production of DL/ID cards.
Illinois DL/IDs will continue to be accepted as primary forms of identification to board commercial airplanes for domestic travel until January 22, 2018.
Fine, whatever, as long as the damn thing doesn’t get lost in the maw of the unreliable Chicago mail – seriously, what percentage of these DL/ID cards will be left to burn under a dumpster?
What percent will be delivered to the wrong address? I’d estimate that our building gets several erroneously delivered pieces of mail a week. Often inconsequential direct mail, but often checks, invoices, utility bills, magazines, and so on. Let’s hope the Chicago branch of the USPS takes special care to deliver these new driver licenses…
A few scraps of news discovered on my browser recently. Or is it in my browser?
Federal Bureau of Investigation Chicago Division…
Jimmy Comey, FBI director, seems to be of the mind that the only way police can do their jobs is if they are allowed to be a military invading force, civil liberties be damned. If a cop is worried about his actions being controversial, perhaps the actions are the problem, not the videotape. Comey must want to be fired; the last time this topic came up, the White House vehemently disagreed via multiple channels. What will happen this time? I am also heartened by the comments on this article: for once, 90% of them are thoughtful, and most agree that Comey is way out of line.
The director of the F.B.I. reignited the factious debate over a so-called “Ferguson effect” on Wednesday, saying that he believed less aggressive policing was driving an alarming spike in murders in many cities.
James Comey, the director, said that while he could offer no statistical proof, he believed after speaking with a number of police officials that a “viral video effect” — with officers wary of confronting suspects for fear of ending up on a video — “could well be at the heart” of a spike in violent crime in some cities.
“There’s a perception that police are less likely to do the marginal additional policing that suppresses crime — the getting out of your car at 2 in the morning and saying to a group of guys, ‘Hey, what are you doing here?’” he told reporters.
The FBI wants free rein to watch you, however, by installing malware on your devices at its whim, without even a warrant…
In an interview with Gizmodo, Senator Ron Wyden revealed that he’ll introduce legislation next week that, if passed, would stop the recent Supreme Court change to what’s known as “Rule 41,” which gave the government broader hacking power.
The Department of Justice has been pushing for the rule change for years, and it was finally granted by the Supreme Court in April. The new rule allows federal judges to grant warrants to agencies like the FBI to deploy “Network Investigative Techniques” (malware) to search any number of computers, be it 10 or 100,000, even if they don’t know what jurisdiction the computers are in. The rule change also allows judges to grant warrants to search the computers of victims of cybercrime, even if that person hasn’t been suspected of a crime. Congress has six months to oppose the rule change or else it will automatically go into effect.
Then there’s the question of infecting computers with malware in order to search them. In an interview with Gizmodo, Senator Wyden aired his concerns.
“By compromising computer systems, it could leave them open to other attackers. What if the government has to turn off the computer’s protections to search it?” he said. “So if the government is out there turning off millions of security features in order to search computers, my view is that there could be some serious security threats.”
The legislation Wyden plans to introduce next week will be just one sentence, simply stating that the changes to Rule 41 will not go into effect.
“What I hope is that the House and Senate Judiciary committees will start looking into the rule,” Wyden said. “They’ll start looking at our bill, and Senators would realize that this is a question for Congress. An agency like the Department of Justice shouldn’t just be able to wave its arms around and grant itself vast new powers. The changes to Rule 41 dramatically expand the government’s hacking authority.”
or your Amazon Echo, if you are foolish enough to own one…
Back in March, I filed a Freedom of Information request with the FBI asking if the agency had ever wiretapped an Amazon Echo. This week I got a response: “We can neither confirm nor deny…”
We live in a world awash in microphones. They’re in our smartphones, they’re in our computers, and they’re in our TVs. We used to expect that they were only listening when we asked them to listen. But increasingly we’ve invited our internet-connected gadgets to be “always listening.” There’s no better example of this than the Amazon Echo.
Philosophy, and most Liberal Arts programs, in my experience, are weighted heavily towards Europe, mostly Northern Europe really.
The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.
Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
This is not to disparage the value of the works in the contemporary philosophical canon: Clearly, there is nothing intrinsically wrong with philosophy written by males of European descent; but philosophy has always become richer as it becomes increasingly diverse and pluralistic. Thomas Aquinas (1225-1274) recognized this when he followed his Muslim colleagues in reading the work of the pagan philosopher Aristotle, thereby broadening the philosophical curriculum of universities in his own era. We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.
An interesting and brief history of the purple bag that Crown Royal Whiskey is sold with:
If you’ve ever bought a bottle of Crown Royal Canadian whisky, you know the iconic bag, that ubiquitous purple “velvet” satchel with gold stitching and tasseled drawstring. Nearly everyone has one, even if they’re unsure where it is, or even how they got it. They’re impossible to throw away, and are just the right size, perfect for, say, a camera lens, weed stash, or as a relative used it for, an old set of dentures. Heck, I had one moons before I even knew about the whisky, and was probably using it to store Tiddlywinks, or my Indian Head pennies.
The bag does go back generations. In fact, the Canadian distillery’s first batch of hooch was blended in 1939 for the premier visit to the Americas by none other than England’s King George VI and his wife, Queen Elizabeth. No reigning British monarch had ever set foot on the continent. Upon hearing of the impending visit, Seagram’s Chairman Samuel Bronfman sought to create a whisky, well, suitable for a king. He was said to have sampled six hundred blends before approving the recipe, the etched-glass crown-shape bottle and cap and now-venerable purple bag, the color chosen to evoke royalty.
For many subsequent years, the purple bag and its contents remained under wraps in Canada. That ended in the 1960s, when some enterprising Canadians, having packed some purple pouches, headed for oil-rich Texas. After that, the blended whisky and its bags were also sold in the United States.
Instagram 8 introduced a new logo. I’m meh about it, I don’t like it, but I’m not having a tantrum. I do use Instagram a few times a week, by the way, here’s my page. Anyway, a discussion of the logo change itself is more interesting:
The skeuomorphic camera icon that has accompanied Instagram until today is a modern-day classic. Not because it’s good — it’s not, really — but because of its omnipresence in users’ phone screens. I bet it’s on the home screen of 99% of people who have the app and who tap it very regularly. When the iPhone first came out — if you’ll remember — skeuomorphism was the default aesthetic and now, for better or worse, it’s all about flat design with a dash of optional gradients so it’s no surprise that’s where Instagram has headed. If there was any surprise it’s that Instagram held on to the skeuomorphism for a relatively long five years.
I doubt anyone will be making cakes and cookies in the shape of the new Instagram logo and that’s the biggest problem the new logo faces: it’s not the old logo. The ensuing shitstorm on the internet today will be epic. About 75% of the negative reaction will be simply to the fact that it has changed and the other 25% will be to the not-quite-fact that there is a generic aesthetic to the new icon where it could be a “camera” icon for the upcoming smart microwave from Apple or whatever other user interface you would imagine. This is not to say it’s a bad-looking icon, no… as far as camera icons go, this is quite lovely and has the minimal amount of elements necessary to be recognized as a camera BUT not the minimal amount of elements necessary to be recognized as Instagram.
Trump is so thin skinned, I can’t even make a joke about it:
Donald Trump’s campaign requires volunteers to sign a contract that forbids them from criticizing the Republican presidential front-runner, his family members, any Trump businesses or products, or his campaign. The six-page contract, reviewed in full by the Daily Dot, theoretically lasts for the entirety of a volunteer’s life.
Legal experts say, however, that the contract’s non-disparagement clause would likely never hold up in court.
The tight control of volunteers stands in stark contrast to not only American political-campaign norms but also Trump’s reputation for speaking his mind.
In addition to forbidding volunteers from disparaging Trump, the contract also includes a sentence that demands volunteers prevent their employees from criticizing Trump, thus making volunteers responsible for the free speech of others for an indeterminate amount of time.
Volunteers also sign a non-disclosure agreement, forbidding them from sharing any sensitive information from the campaign. What kind of information is sensitive or confidential is completely at Trump’s discretion, according to the contract. “He’s apparently so afraid that people would say something bad about him after spending some time on his campaign that they have to sign some sort of agreement,” Perry explained. “I don’t see how this stands up. I don’t see how a court enforces this.”
Volunteers must also sign a non-compete agreement that extends until Trump ceases his campaign for president, identified in the contract as the “Non-Compete Cutoff Date.” The agreement also forbids volunteers from working for another presidential candidate, should they change their minds.
In the event of a Trump victory in November’s general election, the non-compete clause could extend until his 2020 reelection campaign or even 2024, at the end of a second Trump term, the document explains. If Trump loses but wants to run again in the next election or in any presidential election in the future, the contract states the volunteer cannot work for another candidate.