Prosecutors are shifting their focus to warrantless cell-tower location tracking of suspects in the wake of a Supreme Court ruling that law enforcement must obtain probable-cause warrants from judges before affixing GPS devices to vehicles and monitoring their every move, according to court records.
The change of strategy comes in the case the justices decided in January, when they reversed the life sentence of a District of Columbia-area drug dealer, Antoine Jones, who was the subject of 28 days of warrantless GPS surveillance via a device the FBI secretly attached to his vehicle. In the wake of the Jones decision, the FBI pulled the plug on 3,000 GPS tracking devices.
In a Friday filing in pre-trial proceedings for Jones' retrial, Jones' attorney said the government has five months' worth of a different kind of location-tracking information on his client: so-called cell-site information, obtained without a warrant, chronicling where Jones was when he made and received mobile phone calls in 2005.
“In this case, the government seeks to do with cell site data what it cannot do with the suppressed GPS data,” attorney Eduardo Balarezo wrote U.S. District Judge Ellen Huvelle.
The government has produced material obtained through court orders for the relevant cellular telephone numbers. Upon information and belief, now that the illegally obtained GPS data cannot be used as evidence in this case, the government will seek to introduce cell site data in its place in an attempt to demonstrate Mr. Jones’ movements and whereabouts during relevant times. Mr. Jones submits that the government obtained the cell site data in violation of the Fourth Amendment to the United States Constitution and therefore it must be suppressed.
Just as the lower courts were mixed on whether the police could secretly affix a GPS device on a suspect’s car without a warrant, the same is now true about whether a probable-cause warrant is required to obtain so-called cell-site data.
A lower-court judge in the Jones case had authorized the five months of cell-site data without probable cause, based on government assertions that the data was “relevant and material” to an investigation.
“Knowing the location of the trafficker when such telephone calls are made will assist law enforcement in discovering the location of the premises in which the trafficker maintains his supply of narcotics, paraphernalia used in narcotics trafficking such as cutting and packaging materials, and other evidence of illegal narcotics trafficking, including records and financial information,” the government wrote in 2005, when requesting Jones’ cell-site data.
That cell-site information was not introduced at trial, as the authorities used the GPS data for the same function.
The Supreme Court tossed that GPS data, along with Jones’ conviction, on January 23.
The justices agreed to decide Jones’ case in a bid to settle conflicting lower-court decisions — some of which ruled a warrant was necessary, while others found the government had unchecked GPS surveillance powers.
“We hold that the government’s installation of a GPS device on a target’s vehicle, and its use of that device to monitor the vehicle’s movements, constitutes a ‘search,’” Justice Antonin Scalia wrote for the five-justice majority.
The government has maintained in a different case on appeal that cell-site data is distinguishable from GPS-derived data. District of Columbia prosecutors are expected to lodge their papers on the issue by April 6 in the Jones case.
Among other things, the government maintains Americans have no expectation of privacy in such cell-site records because they are “in the possession of a third party” — the mobile phone companies. What’s more, the authorities maintain that cell-site data is not as precise as GPS tracking and that “there is no trespass or physical intrusion on a customer’s cell phone when the government obtains historical cell-site records from a provider.”
In the Jones case, the Supreme Court agreed with an appeals court that Jones’ rights had been violated by the month-long warrantless attachment of a GPS device underneath his car. Scalia’s majority opinion, which was joined by Chief Justice John Roberts, and Justices Anthony Kennedy, Clarence Thomas and Sonia Sotomayor, said placing the device on the suspect’s car amounted to a search.
In a society where privacy is constantly eroding, recent efforts by some employers to pry into Facebook pages to investigate job applicants should be resisted as an unwarranted intrusion on personal freedom and dignity.
Some employers have recently begun requiring job applicants to provide their Facebook user names and passwords so they can review whatever the applicant has posted privately online. Others — so that the password remains secret — are requiring applicants to access their Facebook pages during the job interview to allow the interviewer to see the content.
While the applicant can say no and withdraw from further consideration, at a time of high unemployment — 9.6 percent in Florida — the request is inherently economically coercive. Applicants will often say yes, reluctantly, hoping to land a job.
More than 156 million Americans use Facebook — and Florida has the fourth largest number of users — around 9.5 million. And as anyone who's been on Facebook knows, users often post excruciatingly personal information.
While Facebook has threatened to go to court against employers to protect its users' privacy, it is far from clear whether this new practice is illegal.
The Florida Supreme Court first recognized a common law cause of action for damages for tortious invasion of privacy in 1944. However, one may not complain of acts to which he or she has consented, and an employer's request would likely not be illegal if the applicant freely and knowingly consented to it, and if the policy is applied to all applicants without discrimination.
Nonetheless, the American Civil Liberties Union believes it is an invasion of privacy for employers to insist on looking at people's private Facebook pages as part of the job application process. And some critics suggest the practice is a violation of First Amendment rights, as well as the Stored Communications Act and Computer Fraud and Abuse Act — federal statutes that prohibit access to electronic information and computers without proper authorization.
Two U.S. senators, Chuck Schumer, D-NY, and Richard Blumenthal, D-CT, have asked Attorney General Eric Holder Jr. and the federal Equal Employment Opportunity Commission to investigate whether such employer practices violate federal laws. Legislators in several states also have introduced bills seeking to outlaw the practice.
Legal or not, this practice adds another ugly tool to the panoply already used by employers to investigate job applicants, one likely to generate bad employee morale and damage an employer's reputation...
Has the moment arrived when all those warnings by privacy scolds finally hit home for millions of users of social networks who reveal all sorts of personal information for all to see over the Internet? On a Friday in late March, a perfectly legit but "creepy" mobile app called Girls Around Me sparked a rollicking conversation about the dark side of our socially connected lives.
The app, how it works, and the shocked reactions of his female friends to what it does were all detailed in Cult of Mac reporter John Brownlee's long article on Girls Around Me. The app takes publicly available information from FourSquare and Facebook, mashes it up, and presents users with a map showing where nearby women are located, plus a quick link to their Facebook profiles.
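The mashup Brownlee describes is conceptually simple: join already-public check-ins to public profile links and filter by distance. Here is a minimal sketch of that logic, with a hypothetical merged record shape; the actual FourSquare and Facebook APIs and field names are not shown in the article:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def people_nearby(viewer_lat, viewer_lon, checkins, radius_km=1.0):
    """Keep check-ins within radius of the viewer, nearest first.

    `checkins` is a hypothetical pre-joined record shape:
    {"name": ..., "lat": ..., "lon": ..., "profile_url": ...}
    """
    hits = []
    for c in checkins:
        d = haversine_km(viewer_lat, viewer_lon, c["lat"], c["lon"])
        if d <= radius_km:
            hits.append({**c, "distance_km": round(d, 2)})
    return sorted(hits, key=lambda h: h["distance_km"])

# Example: one check-in a block away, one in another city entirely.
checkins = [
    {"name": "A.", "lat": 42.3610, "lon": -71.0610, "profile_url": "fb.example/a"},
    {"name": "B.", "lat": 40.7128, "lon": -74.0060, "profile_url": "fb.example/b"},
]
nearby = people_nearby(42.3601, -71.0589, checkins)
```

The point of the sketch is that nothing private is touched: a distance filter over data people had already published is all such an app needs.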
For many, that simple app probably doesn't seem like a likely candidate for a tipping point in the ongoing privacy-awareness campaign waged by fuddy-duddies like the "Linux aficionado" described in Brownlee's piece, who reacted with "comical smugness" when the reporter showed a group of friends what the app was revealing about blissfully unaware women in the vicinity.
But it was the "less computer-affable" of Brownlee's friends, mostly female, whose reactions as described by the reporter caused such a commotion over privacy concerns. The meat of Brownlee's piece isn't so much the nuts-and-bolts of how Girls Around Me works, but what seeing it in action meant to women who might have been concerned about online privacy in a theoretical way but had probably never seen in such plain terms what telling the Internet everything about yourself, down to your actual physical location, really means.
The reaction to those women's reactions to Girls Around Me has clearly been pretty potent. The application for mobile devices like the iPhone was being called "creepy" and a "stalker app" on Friday afternoon. Within a few hours of the Cult of Mac publishing the story, FourSquare informed the site that it had "killed Girls Around Me's API access to their data, effectively knocking the app out of commission."
But as Brownlee notes in the original story, the problem isn't so much Girls Around Me as it is the huge numbers of people who simply don't take care to protect their privacy while participating in social networking. He characterizes the developers at i-Free, the Russian company that makes Girls Around Me as "super nice" guys who "certainly don't mean for this app to be anything beyond a diversion."
Once you're on the page, simply upload a photo (or enter an image's URL into the appropriate box) and click the "Call Method" button. A few moments later, you'll see your image overlaid with some strange markings. Hover your mouse cursor over any of those dots and boxes to see what Face.com's API calculated. There'll be details such as the estimated age, minimum age, maximum age, gender and mood of any individuals in the image. You'll also see whether or not the software believes that a person is wearing glasses or smiling.
The image analysis will also offer percentages after each detail. Those indicate how sure the software is about any particular number or characteristic. Many factors, such as image quality, wrinkles, facial smoothness, poses and so on will affect both that percentage and the final result, according to VentureBeat's Sarah Mitroff.
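Those per-attribute percentages behave like confidence scores, so a caller would typically threshold them before trusting a value. A short sketch of that filtering step, assuming a hypothetical JSON shape for one detected face (the article does not show the API's real response format):

```python
def confident_attributes(tag, threshold=70):
    """Keep only the attributes the detector is sufficiently sure about.

    `tag` is one detected face, shaped (hypothetically) like:
    {"attributes": {"gender": {"value": "female", "confidence": 88}, ...}}
    """
    return {
        name: attr["value"]
        for name, attr in tag["attributes"].items()
        if attr["confidence"] >= threshold
    }

# Hypothetical detection result for one face in an uploaded photo.
face = {
    "attributes": {
        "gender": {"value": "female", "confidence": 88},
        "age_est": {"value": 31, "confidence": 52},
        "glasses": {"value": False, "confidence": 91},
        "smiling": {"value": True, "confidence": 97},
    }
}
trusted = confident_attributes(face)  # the 52%-confidence age estimate is dropped
```

With a 70% cutoff, the low-confidence age estimate is discarded while gender, glasses and smile survive, which matches how the article suggests treating shaky age guesses.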
I tested Face.com's new API with several images of individuals whose ages I know and found that it's fairly accurate, within a three-year range in most cases — but that it can sometimes make dramatically wrong guesses about photos of people who are making silly faces or posing in dramatic light.
Face.com chief executive Gil Hirsch probably wouldn't be surprised about these results as he did mention to Mitroff that "[o]n average, humans are much better at detecting ages than machines."
But absolutely accurate or not, Face.com's new API is certainly a fun way to waste a couple of minutes. And who knows, maybe it'll help you finally narrow down your co-worker's age range.
Tee hee... Get yr fucking camera outta my face!
Once the stuff of sci-fi and spy flicks, facial recognition technology has evolved into a concrete reality touching nearly everyone on the planet.
The technology figures prominently in post-9/11 security. According to the International Civil Aviation Organization, 93 countries now issue passports containing the bearer’s biometric facial data. A number of U.S. states use facial recognition to prevent individuals from obtaining multiple driver's licenses under different names. And law enforcement agencies successfully use it to identify criminals from video footage.
In the pre-Google, pre-cloud-computing era, the technology required for these facial recognition systems was exclusively in the hands of the governments and organizations that deployed them. Flash forward ten years and the technology is available off the shelf, biometric databases are booming and the personal information of millions of people is freely available in the cloud.
These new circumstances have prompted the International Biometrics and Identification Association (IBIA), a trade association promoting the appropriate use of identity and security technology, to raise the red flag on an impending “perfect storm.”
The IBIA warns that this perfect storm may destroy the barrier separating our online and offline identities, altering our notions of what constitutes privacy in today’s connected world.
Identification in moments
Imagine a scenario in which anyone with a mobile device could capture an image from a distance and use facial recognition software to identify the individual and access a wealth of personal information that they, or others, have uploaded over the years. Researchers at Carnegie Mellon University have already done it.
In August a team led by Carnegie Mellon Professor Alessandro Acquisti reported that they had successfully combined three technologies accessible to anyone (a commercially available face-recognition tool, cloud computing, and public information from social network sites such as Facebook) to identify individuals online and in the physical world.
In their first experiment, Acquisti’s team was able to scan profiles on a popular online dating site and identify users, protected only by pseudonyms, based on their photos. In another experiment, the team used the technology to identify individuals on campus based on their Facebook profile photos. In a third experiment, the researchers identified students’ Social Security numbers and predicted their personal interests using a photo of the subject’s face.
“The results foreshadow a future when we all may be recognizable on the street — not just by friends or government agencies using sophisticated devices — but by anyone with a smart phone and Internet connection,” said the researchers...
The FBI once taught its agents that they can “bend or suspend the law” as they wiretap suspects. But the bureau says it didn’t really mean it, and has now removed the document from its counterterrorism training curriculum, calling it an “imprecise” instruction. Which is a good thing, national security attorneys say, because the FBI’s contention that it can twist the law in pursuit of suspected terrorists is just wrong.
“Dismissing this statement as ‘imprecise’ is a rather unsatisfying response given the very precise lines Congress and the courts have repeatedly drawn between what is and is not permissible, even in counterterrorism cases, over the past decade,” Steve Vladeck, a national-security law professor at American University, says. “It might technically be true that the FBI has certain authorities when conducting counterterrorism investigations that the Constitution otherwise forbids, but that’s good only so far as it goes.”
The reference to law-bending was noted in a letter to FBI Director Robert Mueller from Sen. Richard Durbin that Danger Room obtained. When Danger Room asked for the original document, the FBI initially declined. On Wednesday, a Bureau spokesperson relented, but refused to say who prepared the document; how long it was in circulation; and how many FBI agents, analysts and officials received its instruction.
The undated piece of instructional material (.pdf) notes that “under certain circumstances, the FBI has the ability to bend or suspend the law to impinge on the freedom of others.” Those circumstances include “the ability to gather information on individuals which would normally be protected under the U.S. Constitution through the use of FISA [the Foreign Intelligence Surveillance Act], Title 3 monitoring [general law enforcement surveillance], NSL [National Security Letter] reports, etc.”
Some surveillance experts were confused by that explanation. Surveillance under the Foreign Intelligence Surveillance Act or so-called “Title 3” law-enforcement surveillance requires the approval of judges. National Security Letters — administrative subpoenas for records issued by FBI officials, not judges — are troubling to civil libertarians, as the practice is ripe for abuse, but the issuance of the letters themselves is legal. In other words, there shouldn’t be any suspension of the law.
“This certainly does not read as if a lawyer wrote it,” says Robert Chesney, a national-security expert at the University of Texas’ law school. “Congress has given the FBI the authority to wiretap, collect business records, and gather other forms of information for intelligence purposes, subject to certain safeguards. It is a severe misstatement to refer to the exercise of these lawful authorities as ‘bending’ or ‘suspending’ the law; that mischaracterization runs the risk of both delegitimizing these lawful tools and, simultaneously, conveying to agents the mistaken impression that there might be some more general power to disobey the law during intelligence investigations.”
The FBI discovered the document, removed it from its curriculum, and allowed aides to the Senate Judiciary Committee to examine it as part of a six-month review into improper counterterrorism training spurred by Danger Room’s reporting. It was among hundreds of pages of training material — out of 160,000 reviewed, the FBI says — that the FBI took out of circulation for “imprecision”; inaccuracy; reliance on racial, ethnic or religious stereotypes; or conflating illegal behavior with constitutionally protected activities. No FBI official responsible for any of the discarded training material received disciplinary action.
A Fredericksburg man faces two counts of assault for allegedly pointing his finger at police officers, another example of how any behavior except complete subservience to law enforcement is now being treated as a crime.
David Loveless, who has no criminal record, was arrested and handcuffed last week after he allegedly made a hand gesture at police who had testified against his son in a robbery case.
He now faces two counts of assault on a law enforcement officer by way of intimidation and two counts of obstruction of justice.
Police spokesperson Natatia Bledsoe claimed Loveless made a gun gesture at police officers, but Loveless denies making any kind of gesture at all.
“I don’t see how I was pointing my finger,” Loveless told ABC7. “If anything I was reaching into my pocket to get a pack of cigarettes. If that’s what they saw, they have a vivid imagination.”
As we have previously highlighted, almost weekly there is a new case of someone being arrested and charged with assaulting a police officer merely for speaking out, making a gesture, or attempting to protect themselves.
Indeed, in some cases a person who is brutally beaten by cops is subsequently charged with assaulting a police officer.
Last year we reported on a case in which Dayton police tasered, pepper-sprayed and beat a mentally handicapped teen and then charged him with assault because the officers took the boy’s speech impediment as “a sign of disrespect”.
17-year-old Jesse Kersey was charged with “assault on a peace officer, resisting arrest, and obstructing official business,” after he became confused when police started asking him questions. Kersey was tased and punched as cops threatened to arrest neighbors who tried to tell them the boy was mentally handicapped.
Not showing complete fealty to cops is now treated as “disrespect” and punishable by a beat down. Having your head smashed in by cops also now qualifies as you assaulting them...
Fuck da police. Seriously, what kind of shit is this?
Last week Amazon, the online retailer, announced it was buying a robot maker called Kiva Systems for $775 million in cash. Before you get excited that Amazon may offer a robot that can tuck you into bed at night and read Kindle books to you, this isn’t that kind of robot company. Instead, Kiva Systems’ orange robots are designed to move around warehouses and stock shelves.
Or, as the company says on its Web site, using “hundreds of autonomous mobile robots,” Kiva Systems “enables extremely fast cycle times with reduced labor requirements.”
In other words, these robots will most likely replace human workers in Amazon’s warehouses.
Is this one more step, a quickening step, toward the day when robots put many of us out of work? Most roboticists don’t see the coming robot invasion that way.
Michael Kutzer and Christopher Brown, robotics research engineers with the Johns Hopkins University Applied Physics Laboratory, explained that current robots are being designed to work alongside people, not replace them, in the work force.
For example, the researchers are working on miniature robots just a quarter of an inch wide that could help doctors go inside bone during surgery, and another robot, about the size of a deck of cards, could help in hostage situations by allowing police to inconspicuously scan a room.
In each instance, humans are still needed to control the robots. The two engineers believe Amazon’s new robots will do the same thing: help speed things up in the warehouses.
Robots have been in factories for decades. But increasingly we will see them out in the open. Already little ones — toys, really — sweep floors. But they are getting better at doing what we do. Soon, if Google’s efforts to create driverless cars are successful, cab drivers, cross-country truckers and even ambulance drivers could be out of a job, replaced by a computer in the driver’s seat.
We are starting to see robots on the battlefield. We could eventually have robot police officers and firefighters, robotic guides, robot doctors, maybe even robotic journalists.
When I asked Mr. Brown if robots would eventually take on a broader role in the work force and possibly replace workers, he said, “It is much more likely, for now, that robots will help augment people’s abilities, allowing us to use robots for things humans can’t do.” Even if that changes, he added, “you’ll have to have someone who builds the robots.”
But Mr. Brown and Mr. Kutzer have advanced degrees in engineering. They will most likely never struggle to find work. The guy in the Amazon warehouse with a year toward an associate’s degree at a community college is a different story. It is unlikely that he is going to build robots if he is put out of work.
Yet those who are paving the way to a world with robots don’t see it that way. “Those who lose jobs to robots will have an incentive to acquire skills that are currently beyond the skills of robots — and there are many human skills that will not be surpassed soon by robots,” explained Colin Allen, co-author of the book “Moral Machines” and a professor of cognitive science at Indiana University...
Welcome to the next generation in surveillance technology. A Japanese company, Hitachi Kokusai Electric, has unveiled a novel surveillance camera that is able to capture a face and search up to 36 million faces in one second for a similar match in its database.
While the same task would typically require manually sifting through hours upon hours of recordings, the company's new technology searches algorithmically for a facial match. It enables any organization, from a retail outlet to the government, to monitor and identify pedestrians or customers from a database of faces.
Hitachi’s software can recognize a face turned up to 30 degrees vertically or horizontally away from the camera, and it requires faces to fill at least 40 by 40 pixels for accurate recognition. Any image, whether captured on a mobile phone, a handheld camera, or a video still, can be uploaded and searched against the database for matches.
“This high speed is achieved by detecting faces through image recognition when the footage from the camera is recorded, and also by grouping similar faces,” Seiichi Hirai, a Hitachi Kokusai Electric researcher, told DigInfo TV...
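Hirai's two tricks, featurizing faces at record time and pre-grouping similar ones, are what turn a brute-force scan into a fast two-stage lookup: compare the query against a representative of each group first, then only against that winning group's members. Here is a toy sketch of the idea in pure Python; the feature vectors, the cosine measure, and the thresholds are illustrative assumptions, not Hitachi's actual algorithm:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

class FaceIndex:
    """Group faces by similarity at ingest so search touches few vectors."""

    def __init__(self, group_threshold=0.9):
        self.group_threshold = group_threshold
        self.groups = []  # each: {"rep": vector, "members": [(face_id, vector)]}

    def add(self, face_id, vec):
        # Ingest: file the face under the closest existing group,
        # or start a new group if nothing is similar enough.
        for g in self.groups:
            if cosine(vec, g["rep"]) >= self.group_threshold:
                g["members"].append((face_id, vec))
                return
        self.groups.append({"rep": vec, "members": [(face_id, vec)]})

    def search(self, query):
        # Stage 1: pick the most similar group by its representative.
        best_group = max(self.groups, key=lambda g: cosine(query, g["rep"]))
        # Stage 2: scan only that group's members for the best match.
        return max(best_group["members"], key=lambda m: cosine(query, m[1]))[0]

index = FaceIndex()
index.add("alice-cam1", [1.0, 0.0])
index.add("alice-cam2", [0.98, 0.1])  # similar face, joins the same group
index.add("bob-cam1", [0.0, 1.0])     # dissimilar face, starts a new group
match = index.search([0.98, 0.1])
```

Because a query is compared against group representatives rather than every stored face, the per-query work scales with the number of groups, which is how pre-grouping can make a 36-million-face scan feasible in real time.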
The U.S. intelligence community will now be able to store information about Americans with no ties to terrorism for up to five years under new Obama administration guidelines.
Until now, the National Counterterrorism Center had to immediately destroy information about Americans that was already stored in other government databases when there were no clear ties to terrorism.
Giving the NCTC expanded record-retention authority had been called for by members of Congress who said the intelligence community did not connect strands of intelligence held by multiple agencies leading up to the failed bombing attempt on a Detroit-bound airliner on Christmas 2009.
“Following the failed terrorist attack in December 2009, representatives of the counterterrorism community concluded it is vital for NCTC to be provided with a variety of datasets from various agencies that contain terrorism information,” Director of National Intelligence James Clapper said in a statement late Thursday. “The ability to search against these datasets for up to five years on a continuing basis as these updated guidelines permit will enable NCTC to accomplish its mission more practically and effectively.”
The new rules replace guidelines issued in 2008 and have privacy advocates concerned about the potential for data-mining information on innocent Americans.
“It is a vast expansion of the government’s surveillance authority,” Marc Rotenberg, executive director of the Electronic Privacy Information Center, said of the five-year retention period.
The government had put strong safeguards in place at the NCTC for the data collected on U.S. citizens for intelligence purposes, Rotenberg said, but these new guidelines undercut the federal Privacy Act.
“The fact that this data can be retained for five years on U.S. citizens for whom there’s no evidence of criminal conduct is very disturbing,” Rotenberg said.
“Total Information Awareness appears to be reconstructing itself,” Rotenberg said, referring to the Defense Department’s post-9/11 data-mining research program that was killed in 2003 because of privacy concerns...
When you go outside or visit other public places, such as a bank or a mall, have you automatically given up your Fourth Amendment rights and consented to a search? When it comes to tracking you via facial recognition technology, what if the government or other law enforcement were to argue exactly that: that simply by being in a place where security cameras are present, you waived your Fourth Amendment rights and consented to a search?
The FBI and DOD sponsored a legal series about the U.S. government using facial recognition; the latest forum was titled "Striking the Balance - A Government Approach to Facial Recognition Privacy and Civil Liberties." Whenever the word 'balance' is used, privacy and civil liberties are usually about to be kicked in the name of 'security.' When it comes to surveillance via facial recognition technology, federal law enforcement, intelligence personnel and national security agencies are looking into the "gaps in legal/policy authority that may result in privacy and civil liberties vulnerabilities if left unaddressed."
The Future of Privacy Forum (FPF) Senior Fellow Peter Swire, also a law professor at Ohio State University, spoke about "Facial Recognition by the Government: Privacy and Civil Liberties Issues." Since using "one's facial image, with or without knowledge or consent," can identify and be used to track a person "an inherent tension exists between privacy and facial recognition." The forum was to "examine where the appropriate balance lies between crime and terrorism prevention using facial recognition and robust privacy safeguards." Swire started with two different perspectives about facial recognition, according to FPF.
1) It has always been legal to observe people in public, and facial recognition technology is simply making this easier.
2) Facial recognition technology allows an unprecedented ability to surveil and track people, and this information could be stored indefinitely and correlated with other personal information.
Although "observing a person in public has traditionally not required a warrant," Professor Swire pointed out Fourth Amendment rights figure heavily into the constitutional issues impacting facial recognition tracking. Swire said the Supreme Court's GPS tracking decision "may dramatically impact privacy by requiring law enforcement agents to obtain a warrant to conduct surveillance on suspects in public, something law enforcement has never had to do. However, the fourth amendment contains a consent exception; if an individual consents to a search, a warrant is not required. Professor Swire pointed out that some might argue that individuals consent to going outside or to other public places (i.e. a bank or mall) where security cameras are present."
When the U.S. Supreme Court ruled on U.S. v. Jones GPS tracking, Justice Sotomayor made a strong case for updating our Fourth Amendment laws to protect privacy in this digital age. Sotomayor, in discussing GPS tracking, wrote that "by making available at a relatively low cost such a substantial quantum of intimate information about any person whom the Government, in its unfettered discretion, chooses to track," it may "'alter the relationship between citizen and government in a way that is inimical to democratic society.'"
Professor Swire referenced Justice Sotomayor's worries "that constant surveillance by the government could chill free speech and free association." Constant biometric surveillance such as facial recognition technology may also "lead to discrimination." Just because something is legal does not mean it should be done, he advised the government. How should intelligence agencies "determine whether or not a surveillance program is a good idea?" Swire suggested using the New York Times test: "if the program was detailed on the front page of the New York Times, would the public reaction be negative or positive?"
The military waited six days before releasing the name of U.S. Army Staff Sgt. Robert Bales, accused of killing 16 Afghan civilians earlier this month. One of the reasons for the somewhat unusual delay: to give the military enough time to erase the sergeant from the internet — or at least try to.
That’s according to several Pentagon officials who spoke on the condition of anonymity to McClatchy newspapers about the subject. The scrubbed material included photographs of Bales from the military’s official photo and video distribution website, along with quotes by the 38-year-old sergeant in the Joint Base Lewis-McChord newspaper regarding a 2007 battle in Iraq “which depicts Bales and other soldiers in a glowing light.”
The sergeant’s wife, Karilyn Bales, and their two young children were also moved onto Lewis-McChord, reportedly for their protection. Her blog, “The Bales Family,” about her life as a mother and military spouse, was removed, although precisely how is not known. The military’s reasoning for the blackout: protecting the privacy of the accused and his family.
“Protecting a military family has to be a priority,” a Pentagon official told McClatchy. “I think the feeding frenzy we saw after his name was released was evidence that we were right to try.”
Try as they might, the military couldn’t completely scrub Bales from the web. What you put online lasts pretty much forever, and that’s no different for the military. Reporters quickly discovered cached versions of Bales’ photograph, the quotes from his base newspaper and the family blog. “Of course the pages are cached; we know that,” the official added. “But we owe it to the wife and kids to do what we can.”
But as McClatchy points out, the military didn’t hesitate to release the name of Major Nidal Hasan, who killed 13 people in a 2009 shooting at Fort Hood, Texas. (Though Hasan was unmarried and had no children.)
Bales’ killings of Afghan civilians also potentially maimed the U.S.’s war plans.
While not a special forces operator, Bales was working alongside U.S. commandos at a small combat outpost set up for “village support operations” in Afghanistan’s Panjwai district, within the country’s restive Kandahar province. The mission, effectively: establish ties with the district’s elders in the hope of warding off Taliban infiltration and influence. And with distrust of U.S. forces growing in the wake of the massacre, the mission to stabilize the district’s villages may have become more difficult.
The massacre also raises questions about the military’s awful record of diagnosing and treating (or mismanaging) traumatic brain injuries, which Bales reportedly suffered during a 2010 car accident.
“Any time there’s a very public issue, people want to know what’s going on at the higher level with authority,” Elizabeth Buchanan, director of the University of Wisconsin’s Center for Applied Ethics, told the newspaper company. “So when all of a sudden it’s made public, I don’t think people immediately go to the thought, ‘Well, they’re protecting this individual.’ There’s a societal stance of, ‘Well, what is it they’re hiding?’”
Adverts could soon be tailored according to the background noise around you when using your smartphone, if a patent application by Google becomes reality.
The search engine giant has filed for a patent called ‘Advertising based on environmental conditions’.
As that title implies, it’s not just background sounds that could be used to determine which adverts you see on your mobile phone. The patent also describes using ‘temperature, humidity, light and air composition’ to produce targeted adverts.
The application said: ‘A web browser or search engine located at the user's site may obtain information on the environment (e.g., temperature, humidity, light, sound, air composition) from sensors.
‘Advertisers may specify that the ads are shown to users whose environmental conditions meet certain criteria.
‘For example, advertisements for air conditioners can be sent to users located at regions having temperatures above a first threshold, while advertisements for winter overcoats can be sent to users located at regions having temperatures below a second threshold.’
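The threshold logic quoted from the application is easy to picture. Here is a minimal sketch in Python; the cutoff values and ad categories are hypothetical illustrations, since the filing does not specify concrete numbers:

```python
# Illustrative sketch of the patent's threshold example. The cutoff
# values and ad categories are hypothetical, not from the filing.

HOT_THRESHOLD_C = 30.0   # the "first threshold" in the patent's example
COLD_THRESHOLD_C = 5.0   # the "second threshold"

def pick_ad(temperature_c: float) -> str:
    """Choose an ad category from a single environmental reading."""
    if temperature_c > HOT_THRESHOLD_C:
        return "air conditioners"
    if temperature_c < COLD_THRESHOLD_C:
        return "winter overcoats"
    return "no environmental targeting"
```

The same pattern would presumably extend to the other sensors the application lists, with advertisers attaching criteria to each reading.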
Google has come under fire recently with users becoming increasingly concerned about its attitude to privacy and perceived obsession with making money.
President Obama signed an Executive Order for “National Defense” yesterday that claims executive authority to seize all US resources and persons, including during peacetime, for self-declared “national defense.”
The EO claims power to place any American into military or “allocated” labor use.
“American exceptionalism” is the belief that a 200+ year-old parchment in the National Archives has magical powers to somehow guarantee limited government from 1% tyranny, despite the specific and clear warnings of the US Founders, despite world history of repeated oligarchic/1% tyranny claiming to be for the “good of the people,” and despite US history’s descent into vicious psychopathy hidden in plain view with paper-thin corporate media propaganda.
Sweden was the first European country to introduce bank notes in 1661. Now it's come farther than most on the path toward getting rid of them.
"I can't see why we should be printing bank notes at all anymore," says Bjoern Ulvaeus, former member of 1970s pop group ABBA and a vocal proponent of a world without cash.
The contours of such a society are starting to take shape in this high-tech nation, frustrating those who prefer coins and bills over digital money.
In most Swedish cities, public buses don't accept cash; tickets are prepaid or purchased with a cell phone text message. A small but growing number of businesses only take cards, and some bank offices — which make money on electronic transactions — have stopped handling cash altogether.
"There are towns where it isn't at all possible anymore to enter a bank and use cash," complains Curt Persson, chairman of Sweden's National Pensioners' Organization.
He says that's a problem for elderly people in rural areas who don't have credit cards or don't know how to use them to withdraw cash.
The decline of cash is noticeable even in houses of worship, like the Carl Gustaf Church in Karlshamn, southern Sweden, where Vicar Johan Tyrberg recently installed a card reader to make it easier for worshippers to make offerings.
"People came up to me several times and said they didn't have cash but would still like to donate money," Tyrberg says.
Bills and coins represent only 3 percent of Sweden's economy, compared to an average of 9 percent in the eurozone and 7 percent in the U.S., according to the Bank for International Settlements, an umbrella organization for the world's central banks.
Three percent is still too much if you ask Ulvaeus. A cashless society may seem like an odd cause for someone who made a fortune on "Money, Money, Money" and other ABBA hits, but for Ulvaeus it's a matter of security.
After his son was robbed for the third time, he started advocating a faster transition to a fully digital economy, if only to make life harder for thieves.
"If there were no cash, what would they do?" says Ulvaeus, 66.
Comcast, Cablevision, Verizon, Time Warner Cable and other Internet service providers (ISPs) in the United States will soon launch new programs to police their networks in an effort to catch digital pirates and stop illegal file-sharing.
Major ISPs announced last summer that they had agreed to take new measures in an effort to prevent subscribers from illegally downloading copyrighted material, but the specifics surrounding the imminent antipiracy measures were not made available. Now, RIAA chief executive Cary Sherman has said that ISPs are ready to begin their efforts to curtail illegal movie, music and software downloads on July 12.
“Each ISP has to develop their infrastructure for automating the system,” Sherman said during a talk at the annual Association of American Publishers meeting, according to CNET. Measures will also be taken to establish databases “so they can keep track of repeat infringers, so they know that this is the first notice or the third notice. Every ISP has to do it differently depending on the architecture of its particular network. Some are nearing completion and others are a little further from completion.”
Customers found to be illegally downloading copyrighted material will first receive one or two notifications from their ISPs, essentially stating that they have been caught. If the illegal downloads continue, subscribers will receive a new notice requesting acknowledgement that the notice has been received. Subsequent offenses can then result in bandwidth throttling and even service suspension.
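The escalation described above amounts to a simple progression keyed on a subscriber's notice count. A rough sketch follows; the counts at each step are guesses, since the article notes every ISP will implement the system differently:

```python
# Hypothetical sketch of the "graduated response" flow described above.
# The notice counts at each step are invented for illustration; the
# article says each ISP will implement the system differently.

def next_action(notice_count: int) -> str:
    """Map a subscriber's prior notice count to the next step."""
    if notice_count < 2:
        return "send initial notice"
    if notice_count < 4:
        return "request acknowledgement of receipt"
    if notice_count < 6:
        return "throttle bandwidth"
    return "suspend service"
```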
The news comes shortly after the closure of file-sharing giant Megaupload and increased pressure on other networks thought to be major hubs for the illegal distribution of copyrighted materials. Some studies show that these measures have had no impact on piracy, however, so organizations like the RIAA have been lobbying for ISPs to intervene and develop systems that will allow them to police their networks and directly address subscribers who illegally download copyrighted content.
Standing a little over a meter tall, with an uncanny and rather unnerving resemblance to a small child, the iCub is a European project aimed at helping researchers steal a march on rivals.
That iCub looks like a small child is no coincidence. The open-source robot, funded in part by the European Union, is designed to learn the way a human child does: through interacting with its environment.
To that end, says Professor Giorgio Metta of Genoa’s Istituto Italiano di Tecnologia, the robot has tactile hands, “that can be controlled independently, it has eyes that move independently,” before adding: “We have given it a set of features that are unique for interaction and manipulation, rather than just being able to walk.”
It has 53 motors that move the head, arms and hands, waist, and legs. It can see and hear, and has a sense of its own body configuration as well as its movement. The robot has sensorized skin so it can detect when it is touched.
Results from the research robot (about 20 exist) are shared among researchers.
Unlike a three-year old, the robot, which can crawl on its own, is connected to the rest of the world via a large “umbilical” cord that provides it with power.
So why does it look like a baby? “The idea,” says Professor Metta, “was to have a robot that can be used for research not just by roboticists. Having a robot that can be used for human-robot interaction, but is sophisticated enough to be used by control people that like to have lots of degrees of freedom. Combining all these requirements we ended up with a robot that is pretty small, still complex enough to manipulate objects, that has vision and lots of other sensors.”
Ten to twenty per cent of utterances collected by voice biometrics systems are not strong identifiers of the individual that spoke them, according to Dr. Clive Summerfield, the founder of Australian voice biometrics outfit Armorvox. Voice biometrics systems could therefore wrongly identify users under some circumstances.
Most voice biometrics implementations require users to utter a pass phrase or mention personal details as part of their authentication process. Dr. Summerfield told The Register that while a small fraction of the population, which he labels “wolves”, have voices that match many other voice prints, the need to know the pass phrase means voice biometrics systems are not likely to be casually cracked without an effort to also collect users' secret words. But he also feels that most voice biometrics systems build in tolerances for those with less distinct voice prints, therefore applying a lower authentication standard for all users.
Some of the less-effective voice prints are gathered because of ambient noise when utterances are collected. Signal clipping applied by carriers can also have the unintended consequence of reducing the quality of voice prints. Some individuals simply have generic voice prints that share qualities with many others. Summerfield labels those afflicted, for whatever reason, with poor voice prints as “goats”, in contrast with the majority of “sheep” whose voices are a strong authentication token...
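Summerfield's point about built-in tolerances can be seen with a toy score threshold. This sketch uses invented numbers and is illustrative only, not Armorvox's or any vendor's actual method:

```python
# Illustrative sketch (invented numbers, not any vendor's method) of why
# a single global tolerance lowers the bar for everyone: accepting
# "goats" with weak voice prints means lowering the match threshold,
# which an impostor's moderately similar voice can then also clear.

def verify(match_score: float, threshold: float) -> bool:
    """Accept the speaker if their score clears the system threshold."""
    return match_score >= threshold

STRICT, LOOSE = 0.90, 0.70   # hypothetical thresholds
impostor_score = 0.75        # an impostor against a genuine voice print

print(verify(impostor_score, STRICT))  # False: rejected
print(verify(impostor_score, LOOSE))   # True: the lowered bar lets them in
```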
Two high tech companies have teamed up to create the world’s first system that can detect gunshots and identify the shooter’s face.
To give law enforcement agencies an extra edge in fighting crime, Safety Dynamics, Inc. and FaceFirst are working to integrate their two leading technologies. They will combine Safety Dynamics’ gunshot detection technology with FaceFirst’s facial recognition processor to determine a shooter’s identity in real time.
Under the combined system, when a gunshot is detected Safety Dynamics' ballistic acoustic sensors can immediately pinpoint the source of the shot and direct a high-resolution camera to zoom in on the exact location. The shooter's face will be captured and FaceFirst's biometrics processor will then compare the shooter's face against existing databases to determine their identity, or if they are unknown, create a new facial record...
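The described flow is a three-stage pipeline: locate the shot acoustically, steer a camera to that location, then match the captured face against a database. A schematic stand-in follows; every name and data shape here is hypothetical, as neither vendor's actual API is described in the article:

```python
# Schematic stand-in for the detect-locate-identify pipeline described
# above. All function names and the toy "database" are hypothetical.

def locate_shot(audio_event: dict) -> tuple:
    """Acoustic sensors pinpoint the source of the shot."""
    return (audio_event["bearing_deg"], audio_event["distance_m"])

def capture_face(location: tuple) -> str:
    """Stand-in for zooming a camera on the location and grabbing a face."""
    return f"face@{location[0]:.0f}deg"

def identify(face: str, database: dict) -> str:
    """Compare against existing records, or create a new one."""
    if face in database:
        return database[face]
    database[face] = "new facial record"
    return "unknown: new facial record created"

db = {"face@45deg": "known offender #1042"}
event = {"bearing_deg": 45.0, "distance_m": 120.0}
print(identify(capture_face(locate_shot(event)), db))
```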
After 244 years, the Encyclopaedia Britannica is going out of print.
Those coolly authoritative, gold-lettered reference books that were once sold door-to-door by a fleet of traveling salesmen and displayed as proud fixtures in American homes will be discontinued, company executives said.
In an acknowledgment of the realities of the digital age — and of competition from the Web site Wikipedia — Encyclopaedia Britannica will focus primarily on its online encyclopedias and educational curriculum for schools. The last print version is the 32-volume 2010 edition, which weighs 129 pounds and includes new entries on global warming and the Human Genome Project.
“It’s a rite of passage in this new era,” Jorge Cauz, the president of Encyclopaedia Britannica Inc., a company based in Chicago, said in an interview. “Some people will feel sad about it and nostalgic about it. But we have a better tool now. The Web site is continuously updated, it’s much more expansive and it has multimedia.”
In the 1950s, having the Encyclopaedia Britannica on the bookshelf was akin to a station wagon in the garage or a black-and-white Zenith in the den, a possession coveted for its usefulness and as a goalpost for an aspirational middle class. Buying a set was often a financial stretch, and many families had to pay for it in monthly installments.
But in recent years, print reference books have been almost completely overtaken by the Internet and its vast spread of resources, including specialized Web sites and the hugely popular — and free — online encyclopedia Wikipedia...
Great idea: get rid of all the books and hook us on Wikipedia. Then kill the power and nobody will have access to knowledge. Brilliant!
The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus.
Technically I suppose Google has always been an advertising company, but for the better part of the last three years, it didn’t feel like one. Google was an ad company only in the sense that a good TV show is an ad company: having great content attracts advertisers.
Under Eric Schmidt ads were always in the background. Google was run like an innovation factory, empowering employees to be entrepreneurial through founder’s awards, peer bonuses and 20% time. Our advertising revenue gave us the headroom to think, innovate and create. Forums like App Engine, Google Labs and open source served as staging grounds for our inventions. The fact that all this was paid for by a cash machine stuffed full of advertising loot was lost on most of us. Maybe the engineers who actually worked on ads felt it, but the rest of us were convinced that Google was a technology company first and foremost; a company that hired smart people and placed a big bet on their ability to innovate.
From this innovation machine came strategically important products like Gmail and Chrome, products that were the result of entrepreneurship at the lowest levels of the company. Of course, such runaway innovative spirit creates some duds, and Google has had its share of those, but Google has always known how to fail fast and learn from it.
In such an environment you don’t have to be part of some executive’s inner circle to succeed. You don’t have to get lucky and land on a sexy project to have a great career. Anyone with ideas or the skills to contribute could get involved. I had any number of opportunities to leave Google during this period, but it was hard to imagine a better place to work.
But that was then, as the saying goes, and this is now.
It turns out that there was one place where the Google innovation machine faltered and that one place mattered a lot: competing with Facebook. Informal efforts produced a couple of antisocial dogs in Wave and Buzz. Orkut never caught on outside Brazil. Like the proverbial hare confident enough in its lead to risk a brief nap, Google awoke from its social dreaming to find its front runner status in ads threatened.
Google could still put ads in front of more people than Facebook, but Facebook knows so much more about those people. Advertisers and publishers cherish this kind of personal information, so much so that they are willing to put the Facebook brand before their own. Exhibit A: www.facebook.com/nike. A company with the power and clout of Nike putting its own brand after Facebook’s? No company has ever done that for Google, and Google took it personally.
Larry Page himself assumed command to right this wrong. Social became state-owned, a corporate mandate called Google+. It was an ominous name, invoking the feeling that Google alone wasn’t enough. Search had to be social. Android had to be social. YouTube, once joyous in its independence, had to be … well, you get the point. Even worse was that innovation had to be social. Ideas that failed to put Google+ at the center of the universe were a distraction.
Suddenly, 20% meant half-assed. Google Labs was shut down. App Engine fees were raised. APIs that had been free for years were deprecated or provided for a fee. As the trappings of entrepreneurship were dismantled, derisive talk of the “old Google” and its feeble attempts at competing with Facebook surfaced to justify a “new Google” that promised “more wood behind fewer arrows.”
The days of old Google hiring smart people and empowering them to invent the future were gone. The new Google knew beyond doubt what the future should look like. Employees had gotten it wrong, and corporate intervention would set it right again.
Officially, Google declared that “sharing is broken on the web” and nothing but the full force of our collective minds around Google+ could fix it. You have to admire a company willing to sacrifice sacred cows and rally its talent behind a threat to its business. Had Google been right, the effort would have been heroic and clearly many of us wanted to be part of that outcome. I bought into it. I worked on Google+ as a development director and shipped a bunch of code. But the world never changed; sharing never changed. It’s arguable that we made Facebook better, but all I had to show for it was higher review scores.
As it turned out, sharing was not broken. Sharing was working fine and dandy, Google just wasn’t part of it. People were sharing all around us and seemed quite happy. A user exodus from Facebook never materialized. I couldn’t even get my own teenage daughter to look at Google+ twice, “social isn’t a product,” she told me after I gave her a demo, “social is people and the people are on Facebook.” Google was the rich kid who, after having discovered he wasn’t invited to the party, built his own party in retaliation. The fact that no one came to Google’s party became the elephant in the room.
A teenager has been arrested for allegedly making comments on Facebook about the deaths of six British soldiers in Afghanistan last week.
According to Sky News, Azhar Ahmed, 19, of Ravensthorpe posted comments on his profile page criticizing the level of attention given to the British soldiers who died in a bomb blast, compared with that given to Afghan civilians killed in the war.
He was arrested on Friday and charged over the weekend.
A West Yorkshire Police spokesman said: "He didn't make his point very well and that is why he has landed himself in bother."
Ahmed has been charged with a racially aggravated public order offence and will appear at Dewsbury Magistrates’ Court on 20 March 2012.
The soldiers were killed on March 6, when their Warrior armoured vehicle was blown up by a massive improvised explosive device (IED), in the deadliest single attack on British forces in Afghanistan since 2001.
The deaths take the number of UK troops who have died since the Afghanistan campaign began in 2001 to 404.
It seems you have to be careful if you have an opinion on Facebook these days...
Tagging friends after you snap a photo of them and posting it to Facebook is so last week. A new smartphone application allows you to point an iPhone camera at a friend and tag that person before you even hit the shutter button.
Called "Klik," the iPhone app automatically displays your friends' names in real time when they appear in view of your iPhone's camera. After Klik detects a face, it instantly connects to your Facebook account and scans your friends' photos to identify the person in view. It also scans your iPhone for photos you've tagged on your phone.
When a user snaps the photo, the subject is automatically tagged, and the photo can either be stored on the device or uploaded to a social network.
The app was released on March 7 by Internet facial recognition service provider Face.com. The Israeli startup improved upon its "Photo Tagger" software, which finds friends' faces in photos and automatically suggests nametags for them -- a solution that Facebook adopted in its Photo Tag Suggest feature in late 2010.
With Klik, facial recognition can now be done in real time...
If you use LinkedIn, you've probably told the site where you work, what you do and who you work with. That's a gold mine for hackers, who are increasingly savvy in using that kind of public -- but personal -- information for pinpoint attacks.
It's called "spear phishing," and it paid off last year in two especially high-profile security breaches: a Gmail attack that ensnared several top U.S. government officials and a separate attack on RSA, whose SecurID authentication tokens are used by millions.
In both cases, the attackers successfully tricked their targets into opening e-mail attachments that appeared to come from trusted sources or colleagues.
Investigators haven't disclosed how the attackers gathered information on their victims, but at RSA's security conference last month, the risks of social networking sites -- and LinkedIn in particular -- were a hot topic. Dozens of presenters said the business networking site could be a potent weapon in the hacker toolkit.
"Businesspeople are using LinkedIn for research purposes, and headhunters and marketers use it to recruit. Why wouldn't Chinese intelligence agents use it as well to spear phish?" said security analyst Ira Winkler, the author of "Spies Among Us."
Most of the discussion about LinkedIn's risks was theoretical -- investigators say it's almost impossible to trace back the original source of personal data used in successful "social engineering" attacks.
But in one arresting case study, self-described "hacker for hire" Ryan O'Horo demonstrated how he used LinkedIn to get inside a client's corporate network.
O'Horo is a managing security consultant for IOActive, a services firm that offers vulnerability testing. His customer, a "high-profile company with tens of thousands of employees," had top-notch technical protections.
"We needed to go to the next level," O'Horo said of his efforts to crack its network.
O'Horo created a fake account on LinkedIn, posing as a company employee. He stocked the profile with realistic details -- a plausible job history and skill set -- plus a few credibility-establishing flourishes like a membership in a local hockey league. From his dummy account, O'Horo sent out 300 connection requests to current company employees. Sixty-six were accepted.
Next, O'Horo requested access to a private LinkedIn discussion forum the company's employees had created. The group's moderators granted his request without ever checking a company directory to confirm his identity.
"Now I had an audience of 1,000 company employees," O'Horo said. "I posted a link to the group wall that purported to be a beta test sign-up page for a new project. In two days, I got 87 hits -- 40% from inside the corporate network."
O'Horo got caught just three days into his LinkedIn attack: An astute employee figured out he didn't belong and blew the whistle. But he'd already made his point...
A Minnesota middle school student, with the backing of the American Civil Liberties Union, is suing her school district over a search of her Facebook and e-mail accounts by school employees.
The 12-year-old sixth grade student, identified in court documents only as R.S., was on two occasions punished for statements she made on her Facebook account, and was also pressured to divulge her password to school officials, the complaint states.
"R.S. was intimidated, frightened, humiliated and sobbing while she was detained in the small school room," the complaint states, as she watched a counselor, a deputy and another school employee pore over her private communications.
The lawsuit claims that her First Amendment rights were violated by employees at Minnewaska Area Middle School, in west-central Minnesota, as well as her Fourth Amendment rights regarding unreasonable search and seizure.
The Minnewaska School District denies any wrongdoing.
"The district did not violate R.S.'s civil rights, and disputes the one-sided version of events set forth in the complaint written by the ACLU," according to a district statement.
According to the complaint, R.S. felt that one of the school's adult hall monitors was picking on her, so she wrote on her Facebook "wall" that she hated that person because she was mean.
The message was not posted from school property or using any school equipment or connections, the lawsuit states.
Somehow, the school principal got a hold of a screenshot of the message, and punished R.S. with detention and made her apologize to the hall monitor, the complaint says.
She was in trouble again shortly thereafter for another Facebook post, which asked who had turned her in, using an expletive for effect, the lawsuit says. She was given in-school suspension and missed a class ski trip.
In the third incident, according to the complaint, R.S. was called in by school officials after the guardian of another student complained that R.S. had had a conversation about sex on Facebook.
The girl was called to a meeting with a deputy sheriff, a school counselor and an unidentified school employee, the court documents state.
There, she was "intimidated" into giving up her login and passwords to her Facebook and e-mail accounts, the lawsuit says.
"R.S. was extremely nervous and being called out of class and being interrogated," the lawsuit says.
The officials did not have permission from R.S.'s mother to view her private communications, and they gave the girl a hard time about some of the material they discovered, the lawsuit states.
"Students do not shed their First Amendment rights at the school house gate," Charles Samuelson, executive director for the ACLU in Minnesota, said in a statement. "The Supreme Court ruled on that in the 1970s, yet schools like Minnewaska seem to have no regard for the standard."
On Tuesday, March 6, the French National Assembly (Assemblée Nationale) passed a law proposing the creation of a new biometric ID card for French citizens, with the justification of combating “identity fraud”. More than 45 million individuals in France will have their fingerprints and digitized faces stored in what would be the largest biometric database in the country. The bill was immediately met with negative reactions. Yesterday more than 200 members of the French Parliament referred it to the Conseil constitutionnel, challenging its compatibility with Europeans' fundamental rights framework, including the right to privacy and the presumption of innocence. The Conseil will consider whether the law is contrary to the French Constitution.
The new law compels the creation of a biometric ID card that includes a compulsory chip containing various pieces of personal information, including fingerprints, a photograph, home address, height, and eye color. Newly issued passports will also contain the biometric chip. The information on the biometric chip will be stored in a central database. A second, optional chip will be implemented for online authentication and electronic signatures, which will be used for e-government services and e-commerce.
François Pillet, a French senator, called the initiative a time bomb for civil liberties, warning that those interested in protecting civil liberties must stop the creation of a database that could be transformed into a dangerous, draconian tool. EFF couldn’t agree more. Last year, Privacy International, EFF, and 80 other civil liberties organizations asked the Council of Europe to study whether biometrics policies respect the fundamental rights of every European. Governments are increasingly demanding storage of their citizens’ biometric data on chips embedded into identity cards or passports, and centrally kept on government databases, all with little regard to citizens’ civil liberties. France’s National Commission on IT and Freedoms (CNIL) also published a report criticizing the creation of the centralized biometric database.
France does not have a good track record of initiatives involving biometric identification. In 2009, it introduced biometric passports—which proved to be a disaster. Last year, the French Minister of the Interior admitted that 10 percent of biometric passports in circulation were fraudulently obtained. It is therefore ironic that the justification for the biometrics bill was that it is needed to combat identity fraud...
A research team at the University at Buffalo, State University of New York, is working on video analysis software that analyzes eye movements to spot liars. So far, they say, their results show a promising level of accuracy: in a study of 40 people, the system correctly identified who was telling the truth and who was lying 82.5 percent of the time.
“What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes,” Ifeoma Nwogu, a co-author of the study and professor at the Center for Unified Biometrics and Sensors, told the UB Reporter.
According to a report in Scientific American, their work was inspired by the findings of Paul Ekman, a professor of psychology at the University of California, San Francisco, School of Medicine, who has focused on emotions as they relate to facial expressions.
As for interrogators themselves, their experience suggests that such software would not be practical in all instances and may not always lead them to the right targets. Just as polygraphs have drawn controversy over how reliable they really are, face-detection tools might also generate their share of false positives.
Undaunted, the researchers last year presented their study results at the 2011 IEEE International Conference on Automatic Face and Gesture Recognition and now they are set to broaden their investigations to account for body language too...
The next time you're pulled over for a traffic violation, you could be asked for more than license and registration. You may find yourself peering into an iPhone that scans your iris, records your facial features and takes your fingerprints.
Some police departments plan to start using Moris iPhone scanners this spring. These boxlike, portable biometric scanning devices attach to Apple (AAPL) iPhones, and the two products work together. The Moris is equipped with iris, fingerprint and face recognition technology. It can connect wirelessly to a database of biometric scans that can bring up records of past offenses and confirm that those stopped are who they say they are.
The Moris (Mobile Offender Recognition and Identification System) device is the product of Plymouth, Mass.-based BI2 Technologies. Founded in 2005, the privately held company makes biometric software and hardware products used in applications including identifying and locating missing children and seniors...
"You know, I had this thing where, you know, hey, I don't want that put on my cab, you know, people in the neighborhoods calling you snitch," Johnson says. "I had to get past that point. My heart is good, and I look for good and right in life."
He has recently reported two incidents — one involving a father and son who got into his cab in the middle of the day.
"The father was like 75 years old, couldn't hardly walk," Johnson remembers. "And when he opened up the garage to this condo, all I seen was nothing but Bud Light cases, and empty cases of hard alcohol and pizza boxes, and both of these guys came out and couldn't hardly walk. They was very smelly."
Johnson won an award for notifying police that the men's health and safety were at risk. Since Taxis on Patrol began, more than a thousand calls have come in, ranging from serious crimes to humanitarian concerns.
"I think it'll make communities safer as a result of this 'Neighborhood Watch on Wheels,' if you will," Denver Police Cmdr. Tony Lopez says. He says in this era of tight city budgets, partnering with the private sector to keep the streets safer makes sense.
"Actually, it's serving as a force multiplier for us in the delivery of services and public safety," Lopez says.
And because of that, it's hard to find somebody who doesn't speak highly of the program, except maybe criminals. Lopez says he would like to see the program expand to UPS drivers and truckers...
Make your millions with MONOPOLY Electronic Banking! Keep your finances at your fingertips with MONOPOLY Electronic Banking and 6 cool bank cards! So it’s your birthday? Collect gifts from your generous opponents with the swipe of a card! Pick up your hard-earned salary at the touch of a button! Time to pay rent? Get it paid quickly with the fun, fast Banking Unit! Having your own personal bank card keeps play fast and lets you check your cash in an instant. Just like real life…but much more fun!
The opportunities are rich in MONOPOLY Electronic Banking! Track your cash electronically as you buy properties, pay bills and collect gifts and debts from your opponents. You might be rolling in the dough, but don’t forget to strategize if you want to win big!
Comes with Gameboard, Electronic Banking Unit, Title Deed cards, Chance and Community Chest cards, 6 bank cards, 2 dice, 6 tokens, 32 houses, 12 hotels and instructions.
Requires 2 AAA batteries, not included.
For 2 to 6 players.
Ages 8 and up.
A fun way to introduce your kids to the new cashless society!
Gerald Zirnstein grinds his own hamburger these days. Why? Because this former United States Department of Agriculture scientist and, now, whistleblower, knows that 70 percent of the ground beef we buy at the supermarket contains something he calls “pink slime.”
“Pink slime” is beef trimmings. Once only used in dog food and cooking oil, the trimmings are now sprayed with ammonia so they are safe to eat and added to most ground beef as a cheaper filler.
It was Zirnstein who, in a USDA memo, first coined the term “pink slime,” and he is now coming forward to say he won’t buy it.
“It’s economic fraud,” he told ABC News. “It’s not fresh ground beef. … It’s a cheap substitute being added in.”
Zirnstein and his fellow USDA scientist, Carl Custer, both warned against using what the industry calls “lean finely textured beef,” widely known now as “pink slime,” but their government bosses overruled them.
According to Custer, the product is not really beef, but “a salvage product … fat that had been heated at a low temperature and the excess fat spun out.”
The “pink slime” is made by gathering waste trimmings, simmering them at low heat so the fat separates easily from the muscle, and spinning the trimmings using a centrifuge to complete the separation. Next, the mixture is sent through pipes where it is sprayed with ammonia gas to kill bacteria. The process is completed by packaging the meat into bricks. Then, it is frozen and shipped to grocery stores and meat packers, where it is added to most ground beef.
The “pink slime” does not have to appear on the label because, over objections of its own scientists, USDA officials with links to the beef industry labeled it meat.
“The under secretary said, ‘it’s pink, therefore it’s meat,’” Custer told ABC News.
Concerns about the safety of Facebook profiles are valid, especially as the company grows and people share more information on the site. Facebook has had frightening breaches of user trust in the past, and some questions about where its loyalties lie—with consumers or with corporations—remain unanswered. Nobody can predict whether Facebook will end up taking advantage of the information provided by the millions of people who log into it every hour. But while Facebook itself waffles between creepy and benevolent, it turns out some people are using the site to get downright evil when it comes to online privacy.
An in-depth report from MSNBC reveals numerous documented instances of American colleges and employers demanding that students, employees, and applicants open up their Facebook profiles for review. Tecca.com reported last year on a police department in North Carolina that asked people applying for a clerical job, "Do you have any web page accounts such as Facebook, Myspace, etc.? If so, list your username and password." The Maryland Department of Corrections also asked applicants to hand over their passwords, until an ACLU complaint killed that practice. Still, some applicants report being asked in interviews to log into their Facebook profiles and allow the interviewer to look over their shoulder while they click around their photos and wall posts.
It doesn't end with the job market. College students—athletes in particular—are also subject to this invasive line of inquiry. In the new player handbook for athletes at the University of North Carolina, a passage reads, "Each team must identify at least one coach or administrator who is responsible for having access to and regularly monitoring the content of team members' social networking sites and postings. The athletics department also reserves the right to have other staff members monitor athletes' posts." Elsewhere, students have been told they have to friend their coaches, thus giving the coaches total access to their accounts.
To be sure, there are ways to lock down your Facebook account, even from "friends," but should anyone be forced to resort to such lengths?
Trust me, gym rat. Your outrageously badass treadmill workout has nothing on this.
The Pentagon’s far-out research agency, Darpa, has just released a new video of its Cheetah ‘bot — designed to mimic the rapid movements of cheetahs, the speediest animals in nature — absolutely killing it on a laboratory treadmill.
In fact, the ‘bot is running so fast (reaching 18 miles an hour at its peak) that Cheetah actually set a new land speed record for robotic running, Darpa boasts. The previous record, set in 1989, was a measly 13.1 miles per hour. For comparison’s sake, both of those speeds are well beyond the pace of an average human jogger, though Robo-Cheetah still falls well short of the human world-record holder, Usain Bolt, who clocked an amazing 28 mph during the 100-meter sprint in 2009.
It was after the robotic hummingbird flew around the auditorium -- and after a speaker talked about the hypersonic plane that could fly from New York to the West Coast in 11 minutes -- that things got really edgy.
Vijay Kumar, an engineering professor at the University of Pennsylvania, showed the more than 1,300 attendees at last week's TED conference several videos in which fleets of tiny flying robots performed a series of intricate maneuvers, working together on tasks without colliding or interfering with each other's flightworthiness.
It seemed that, at least for some in the audience, a bridge had been crossed into a new era of technology, one that could change the way we think about robots and their application to such fields as construction, shipping and responding to emergencies.
Kumar's devices (he calls them "Autonomous Agile Aerial Robots") cooperated on building simple structures and showed they were capable of entering a building for the first time and quickly constructing a map that would allow for assessment and response to a structural collapse or fire.
He held up one robot, designed by his students Daniel Mellinger and Alex Kushleyev, which weighs a little more than a tenth of a pound and is about 8 inches in diameter. The device has four rotors; when they spin at the same speed, the robot hovers. If you increase the speed, Kumar explained, the robot flies up. Spinning one rotor faster than the one opposite it causes the robot to tilt. It also can flip over multiple times without losing its ability to fly and can recover its stability when thrown into the air.
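The rotor logic Kumar describes maps cleanly onto a simple rule: equal speeds hover, a uniform increase climbs, and an imbalance across opposite rotors tilts the craft. A minimal sketch of that mapping (the function, speed values, and rotor layout are illustrative assumptions, not details of the Penn robots):

```python
def quad_motion(front, back, left, right, hover_speed=100.0):
    """Classify the net motion implied by four rotor speeds.

    Mirrors the behavior described above: equal speeds -> hover,
    a uniform increase -> climb, and spinning one rotor faster
    than the one opposite it -> tilt.
    """
    total = front + back + left + right
    pitch = front - back   # imbalance across one opposite pair
    roll = left - right    # imbalance across the other pair

    if pitch != 0 or roll != 0:
        return "tilt"
    if total > 4 * hover_speed:
        return "climb"
    if total < 4 * hover_speed:
        return "descend"
    return "hover"

print(quad_motion(100, 100, 100, 100))  # hover
print(quad_motion(110, 110, 110, 110))  # climb
print(quad_motion(110, 90, 100, 100))   # tilt
```

A real controller blends these commands continuously rather than classifying them, but the same four-rotor mixing is the core of it.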
The robots are capable of learning trajectories and maneuvers that can enable them to literally fly through hoops -- and other confined spaces.
When the robots are formed into a flotilla, they calculate (a hundred times a second) and maintain a safe distance between one another. Kumar showed a video of 20 robots flying in a variety of formations -- and moving through obstacles -- inches from each other without interfering with the stability of their neighbors.
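The separation-keeping step described above amounts to a pairwise distance check run on every control tick. A minimal sketch, with made-up positions and a made-up safety radius rather than anything from the Penn controller:

```python
import itertools
import math

def too_close(positions, min_sep=0.3):
    """Return index pairs of robots closer than min_sep meters.

    positions: list of (x, y, z) tuples, one per robot.
    In a flotilla this check would run on each control tick
    (the article cites roughly a hundred times a second).
    """
    violations = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if math.dist(p, q) < min_sep:
            violations.append((i, j))
    return violations

# Three hovering robots; the last two are only ~0.1 m apart.
fleet = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0), (0.5, 0.1, 1.0)]
print(too_close(fleet))  # [(1, 2)]
```

The quadratic pairwise scan is fine for 20 robots; larger swarms typically swap it for a spatial index so each tick stays cheap.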
To cap his presentation, he showed a video, created by his students in three days, of nine flying robots playing the James Bond theme on musical instruments...
A Frenchman claiming that a Google Maps' Street View picture of him peeing in his front yard has made him a "laughingstock" is suing the tech giant for 10,000 euros.
"Everyone has the right to a degree of secrecy," his lawyer, Jean-Noel Bouillard, told Reuters. "In this particular case, it's more amusing than serious. But if he'd been caught kissing a woman other than his wife, he would have had the same issue."
The man, who was not identified but described as in his 50s and living in a village of 3,000 in the Maine-et-Loire region, "discovered the existence of this photo after noticing that he had become an object of ridicule," Bouillard told AFP, asking that the name of the village not be published.
The Street View photo has the man's face blurred out, but villagers figured out who he was immediately, according to Bouillard. Bouillard did not explain why his client was peeing in his front yard rather than inside his home, only that the man was on his own property with the gate closed. The man is suing Google for 10,000 euros (around $13,000) in damages.
Google's Street View has a collection of unusual photos, and peeing isn't the half of it -- apparently it captures lots of prostitutes, drunkenness and public nudity, as well as some very beautiful landscapes.
Denver officials have ordered new training for police detectives and are considering policy changes in the wake of disclosures that initial eyewitness descriptions of crime suspects may be overwritten in some instances and never make it into court files.
And while the Police Department's computer system keeps a log of the edits officers make to those descriptions, attorneys in the public defender's and district attorney's offices cannot recall ever seeing one — raising the specter that potentially critical information may be inadvertently withheld from defense attorneys.
Lt. Matt Murray, a spokesman for the Denver Police Department, said detectives are trained to record changes they make in sections of their reports that cannot be revised. But he also acknowledged that it's hard to know how many detectives have not followed that protocol.
The Denver Post reported last month that the computer software officers use to write reports includes a section that is updated as new information about suspects is uncovered. As a result, a crime victim's description of an attacker could be lost if it's not noted elsewhere.
The Post's report was the first time many in the legal community had heard about the issue.
"It's a major, major, major problem," said Dan Schoen, executive director of the Colorado Criminal Defense Bar. "I don't know that it was malicious or intentional. I like to believe the best about people. But that doesn't mean it's not a major problem."