Ford wants you to relax.
The automaker is developing technology that can predict a workload and autonomously adjust a car’s systems to reduce distractions during dangerous situations.
Using biometric sensors mounted in the steering wheel, seatbelt and the seat itself, the system will be able to monitor a driver’s heart rate, temperature and breathing rate and feed that data into a computer algorithm that can draw a picture of how much stress the person is facing at any given moment.
The system then combines that information with data about current traffic conditions, collected from cameras and radars located around the car, along with speed, steering and throttle inputs, to determine if a driver is about to face a scenario that requires concentration, like merging onto a busy highway during rush hour.
Working with the biometric feedback, the computer can then change the way the car’s warning systems respond to give the overloaded driver earlier notice of potential dangers and even turn down the radio or enable a “do not disturb” function on the in-car phone system to help them focus on the task at hand.
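Ford has not published the algorithm, but the pipeline described above (biometrics to a stress estimate, driving context to a workload estimate, then a policy that mutes distractions) can be sketched roughly in Python. All thresholds, weights and function names below are invented for illustration:

```python
# Hypothetical sketch of a driver-workload estimator. Ford has not
# published its algorithm; every threshold and weight here is invented.

def stress_score(heart_rate, breathing_rate, skin_temp):
    """Map raw biometrics to a 0-1 stress estimate (toy linear model)."""
    hr = min(max((heart_rate - 60) / 60, 0.0), 1.0)      # 60-120 bpm
    br = min(max((breathing_rate - 12) / 18, 0.0), 1.0)  # 12-30 breaths/min
    st = min(max((skin_temp - 33.0) / 4.0, 0.0), 1.0)    # 33-37 C
    return 0.5 * hr + 0.3 * br + 0.2 * st

def workload(traffic_density, speed_kmh, steering_activity):
    """Combine driving context (all inputs 0-1 except speed) into a
    0-1 workload estimate."""
    return min(1.0, 0.4 * traffic_density + 0.3 * (speed_kmh / 130) +
               0.3 * steering_activity)

def cabin_policy(stress, load):
    """Decide which distractions to suppress for the current moment."""
    if stress > 0.7 and load > 0.6:
        return {"phone": "do_not_disturb", "radio": "muted", "warnings": "early"}
    if stress > 0.4:
        return {"phone": "on", "radio": "lowered", "warnings": "early"}
    return {"phone": "on", "radio": "on", "warnings": "normal"}

# A stressed driver merging in heavy traffic gets the quiet cabin:
print(cabin_policy(stress_score(110, 28, 36.5), workload(0.9, 100, 0.8)))
```

The real system would presumably use a trained model rather than fixed linear weights, but the shape of the decision (two fused scores gating a set of cabin actions) is the same.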
Aside from the biometric sensors, much of the technology required to make the system work is already found on Ford’s production cars, including the semi-self-parking Focus, which already eases one of the most stressful parts of driving a car.
FORD: Force Or Remand Daily
Imagine if Microsoft pushed an update to your computer that started recording everything you write in Word and shipped it back to Redmond. Or if the phone on your desk was upgraded to send details of your calls not only to the telecommunications provider, but also to the handset maker.
That’s what some users feared was happening to their Internet histories when Cisco Systems pushed out an update for new Linksys Smart Wi-Fi routers this week. The update moved the routers to a new cloud-based management system called Cisco Connect Cloud — and came with an ominous warning in the terms of service that the routers would record users’ Internet histories as part of the arrangement.
When you use the Service, we may keep track of certain information related to your use of the Service, including but not limited to the status and health of your network and networked products; which apps relating to the Service you are using; which features you are using within the Service infrastructure; network traffic (e.g., megabytes per hour); Internet history; how frequently you encounter errors on the Service system and other related information (“Other Information”). We use this Other Information to help us quickly and efficiently respond to inquiries and requests, and to enhance or administer our overall Service for our customers.
Some technologists and privacy advocates were unnerved, complaining Friday on Internet forums such as Slashdot and Reddit that Cisco’s move was a breach of privacy. Some said they would dump Linksys over the issue. Home routers generally don’t do much of anything except act as passive intermediaries, handing off traffic to Internet service providers without keeping a log.
“The language in the terms of service is really worrying,” said Aaron Brauer-Rieke, a fellow at the Center for Democracy & Technology, an online-rights group, in an interview. “That’s an incredible swath of information, when you think about the fact that your router is the first thing your computer talks to before the ISP to go to the Internet. That could be every Internet address you access when you’re online.”
A Cisco executive said the company made a mistake in its terms of service and is changing the language on the website.
In a statement, Brett Wingo, general manager of Cisco’s home networking division, said:
“We are absolutely not tracking Internet history, nor do we intend to. We recognize that some of the words in the statement are unclear. We are taking immediate steps to modify this language to be more specific about our commitment to protect the personal privacy of our customers...”
Despite concerns about U.S.-made drones ending up in enemy hands, American military contractors are lobbying the government to loosen export restrictions and open up foreign markets to the unmanned aircraft that have reshaped modern warfare.
Companies such as Northrop Grumman Corp. and other arms makers are eager to tap a growing foreign appetite for high-tech — and relatively cheap — drones, already being sold on the world market by countries such as Israel and China.
"Export restrictions are hurting this industry in America without making us any safer," Wesley G. Bush, Northrop's chief executive, said at a defense conference this year. "The U.S. is struggling to sell unmanned aircraft to our allies while other nations prepare to jump into the marketplace with both feet."
The defense industry may want to sell more drones overseas, but arms control advocates are alarmed. The potential for these weapons to fall into enemy hands is great, they say, and easing restrictions could result in remote-controlled killing machines being used in some of the most volatile regions of the world.
Daryl Kimball, executive director of the Arms Control Assn., said that drone sales are problematic because the unmanned vehicles are more affordable than other military aircraft. And with no human pilot at risk, drones could make it easier to decide to go to war, he said.
"The proliferation of this technology will mark a major shift in the way wars are waged," he said. "We're talking about very sophisticated war machines here. We need to be very careful about who gets this technology. It could come back to hurt us."
As the U.S. war effort draws down and the Pentagon budget shrinks, defense companies say they need Congress to ease restrictions so they can tap lucrative foreign markets for their wares.
More important, they say, the current export restrictions may cause the U.S. to lose potential customers to nations eager to elbow their way into the market. Already, Israel is making drones and selling them to several countries, including Azerbaijan, India and Ecuador. China has more than a dozen drones in development.
Genie's out the bottle, beatch!!! If we don't sell 'em, somebody else will! Kill 'em all - let God sort 'em out! Yee-Haw!!!
On Friday’s TRMS, Rachel Maddow spoke to NBC News Islamabad Bureau Chief Amna Nawaz about a growing controversy surrounding the Obama administration’s counter-terrorism program – its drone strikes in Pakistan.
“For nine years the US has been killing people using remote-piloted aircraft in the nuclear-armed, fairly unstable, rabidly anti-American nation of Pakistan. President Bush started this policy but President Obama has tripled down on it. The Obama administration did finally admit to the fact that we are doing this in year nine of the policy, just last month,” said Maddow.
The US has staged 300 strikes in Pakistan since 2004, and questions surrounding the casualty tally are piling up. Anger against the US is growing across Pakistan, with anti-drone rallies also gaining momentum. Maddow shared never-before-seen footage of damage caused by a US drone strike and asked Nawaz about her interview earlier this week with anti-drone Pakistani lawyer Shahzad Akbar.
Akbar, based in Islamabad, represents the families of civilian victims of drone strikes. He explained to Nawaz:
“The problem is that no one cares if ‘nobody’ is killed, and by ‘nobody,’ I mean a person who is nobody. A person who is probably just living in that area, has no money, no education, no representation,” he said. “The point here is that if we are successful in killing one or two people who we really want to kill, in order to do that we kill 40 people – who cares? And this is a sad kind of attitude we have from the American government and unfortunately from my own government..."
(related: 'mystery' drone strike in Mali)
(related: drone strike in the Philippines)
It's not right to kill innocent children for any reason. Some people may even call that 'Terrorism'.
American researchers took control of a flying drone by hacking into its GPS system - acting on a $1,000 (£640) dare from the US Department of Homeland Security (DHS).
A University of Texas at Austin team used "spoofing" - a technique where the drone mistakes the signal from hackers for the one sent from GPS satellites.
The same method may have been used to bring down a US drone in Iran in 2011.
Analysts say that the demo shows the potential danger of using drones.
Drones are unmanned aircraft, often controlled from a hub located thousands of kilometres away.
They are mostly used by the military in conflict zones such as Afghanistan...
A Defense Advanced Research Projects Agency (DARPA) contest to develop a small spy drone capable of performing a series of maneuvers, including landing briefly to capture surveillance video, has concluded without a winner. DARPA says the $100,000 prize will not be awarded.
DARPA launched the competition to create a portable unmanned aerial vehicle (UAV) for intelligence gathering a year ago. The goal was to develop a "military-relevant, backpack-portable UAV" capable of vertical take-off, flying out of sight, landing, capturing video, and returning. More than 140 teams entered the contest, called UAVForge, but none of them successfully completed the required maneuvers.
"While some teams were able to reach the observation area, none were able to land on a structure and complete the mission," DARPA said in a statement announcing the results.
DARPA established a website, UAVForge.net, to encourage and support crowd-sourcing of ideas during the competition. The entries were narrowed to nine teams of finalists, which participated in a "fly off" at Fort Stewart in Georgia. The course required the UAV to fly below 1,000 feet, maneuver around obstacles, land on or hover above a physical structure, and visually track moving objects in real time...
Fighter jets thunder above the English countryside. Missiles stand ready. And Big Brother is watching like never before.
The London Olympics are no ordinary games. Not since World War II have Britain and the United States teamed up for such a massive security operation on British soil.
Hundreds of American intelligence, security and law enforcement officials are flying across the Atlantic for the games that begin July 27. Some will even be embedded with their British counterparts, sharing critical intelligence and troubleshooting potential risks. Dozens of Interpol officers will also be deployed.
The unique collaboration is rooted in common threats the partners have faced since the Sept. 11 terror attacks on the U.S. and Britain's own deadly suicide bombings in 2005.
Britain was America's closest ally in Afghanistan and Iraq, making it a prime target of Islamic terror groups. And dozens of recent terror plots, including the 2006 plot to blow up nearly a dozen trans-Atlantic airliners, have been hatched within Britain's sizeable Muslim population, more than 1 million of whom have ties to Pakistan.
Although other Olympics have taken place since 9/11 -- Salt Lake City, Athens, Turin, Beijing and Vancouver -- London poses a different breed of security challenge.
"I'm confident that there is more than adequate security here for these games," Louis Susman, the U.S. Ambassador to the U.K., told The Associated Press. "That said, we live in a tumultuous world, whether that be in New York or London."
Intelligence officials say there has been an expected increase in chatter among extremist groups, but there are still no specific or credible threats to the London games. The terror level is labeled substantial, a notch below severe, the level at which it stood for much of the past decade. A substantial threat level indicates that an attack is a strong possibility.
"There is a perception in some quarters that the terrorist threat to this country has evaporated," said Jonathan Evans, head of Britain's domestic spy agency, MI5. "Bin Laden is dead, al-Qaida's senior leadership in Pakistan is under serious pressure and there hasn't been a major terror attack here for seven years. (But) in back rooms and in cars and on the streets of this country, there is no shortage of individuals talking about wanting to mount terrorist attacks here..."
The FBI plans to test by 2014 a database for searching iris scans nationwide to more quickly track criminals, according to budget documents and a contractor working on the project.
The Next-Generation Identification system, a multiyear $1 billion program already under way, is expanding the server capacity of the FBI’s old fingerprint database to allow for rapid matching of additional physical identifiers, including facial images and palm prints.
Today, iris scans conjure images of covert agents accessing high-security banks and laboratories. But, increasingly, law enforcement agencies are spending state and federal funds on iris recognition technology at jails to monitor inmates. Some Missouri prisons are buying the same system the FBI acquired, partly so that they can eventually exchange iris images with federal law enforcement officials. And many counties are storing pictures of prisoner irises in a nationwide database managed by a private company, BI2 Technologies.
The FBI expects to collect many of these state and local iris images, according to BI2 officials and federal documents.
A May 17 budget justification document states one of the “planned accomplishments for BY13” -- the budget year that begins Oct. 1 -- is to “demonstrate iris recognition capabilities via the iris pilot.”
A June FBI advisory board memo that Nextgov reviewed states, “supervised release/corrections are candidates for the pilot, being that many already have the capability in place. The additional goal is to start to build an iris repository.” Iris recognition is a helpful identification tool, according to the memo, because it “is very accurate,” does not require human intervention and “the hardware footprint is also very small [due] to the size of the iris image.”
The aim of iris recognition at corrections facilities, according to law enforcement officials, is to promptly catch repeat offenders and suspects who try to hide their identities...
Just what kind of information can the government get with a so-called “national security letter” – the tool that allows investigators to seek financial, phone and Internet data without a judge’s approval?
It’s a secret.
The letters let the Federal Bureau of Investigation get information without going before a judge or grand jury if it’s relevant to a national security investigation. The letters have been around since the 1980s, but their use grew after the Sept. 11, 2001 terrorist attacks and passage of the USA Patriot Act. Tens of thousands of the requests are sent each year, but they are generally subject to strict secrecy orders.
In response to a Freedom of Information Act request by the American Civil Liberties Union, the Justice Department has revealed for the first time templates for each of the types of national security letters it sends – nine in all. Among other things, the letters show that the FBI is now informing people who receive the letters how they can challenge the documents in court.
But some key elements of the letters remain blocked from view – including lists of material the FBI says companies can send in response to the letter.
The most basic requests outlined in the templates are for name, address and length of service for either phone or Internet accounts. The broadest requests seek things such as entire credit reports, Internet activity logs, phone “billing records,” “financial records” or “electronic communications transactional records.”
What exactly do those terms mean? Well, there’s the rub.
A 2008 opinion from the Justice Department’s legal counsel found that the letters could request “only those categories of information parallel to subscriber information and toll billing records for ordinary telephone service.” What exactly counts as “parallel” could be debated.
In several of the templates, the FBI includes a list of specific items that “may be considered” by the companies to be responsive to the requests. The list for phone billing records includes 15 bullet points; there are 13 points on the list for electronic data. The items associated with financial records appear to stretch on for two pages. But we can’t know for sure what is there because it has been redacted.
Some broad outlines are available: Financial records include “any record held by a financial institution pertaining to a customer’s relationship with the financial institution.”
Electronic records involve “transaction/activity logs” and email “header information,” which includes things such as the “to” and “from” lines of a message.
The letters point out that companies aren’t supposed to tell investigators about the content of their customers’ messages; courts have long held that phone conversations and the texts of recent emails are available only with search warrants. The template to get electronic records specifically warns companies not to provide the subject lines of emails for this reason.
Beyond that, it’s unclear...
It is not surprising that ‘terrorism’ has become the bogeyman in whose name every last lunacy of the government can be justified, including committing acts that would otherwise be illegal and anathema to any civilised society. Using predator drones equipped with Hellfire missiles to summarily execute people based on ‘suspicious’ activities, without any due process of law, and even at the cost of substantial civilian deaths, all seems to be forgiven.
Ignoring for a moment the bogus methodology of counting all military-aged males in a drone strike zone as combatants, the focus on civilian casualties of drone attacks seems to miss the bigger question. This was neatly summarised by Dr Paul Craig Roberts, former assistant secretary of the treasury:
It has never been revealed how a single citizen, or any number thereof, could possibly comprise a threat to a government that has a trillion plus dollars to spend each year on security and weapons, the world’s largest navy and air force, 700 plus military bases across the world, large numbers of nuclear weapons, 16 intelligence agencies plus the intelligence agencies of its NATO puppet states and the intelligence service of Israel.
It has also never been adequately explained why the most powerful military superpower in history, which overcame the mighty Wehrmacht, crushed the Imperial Japanese Army, and stared down a nuclear armed Soviet Union in an existential contest of Mutually Assured Destruction (MAD), now considers it necessary to carry out state sponsored assassinations, based on mere suspicion, of individuals who have none of the resources, technological sophistication, or military prowess of its erstwhile enemies.
The reality of ‘insurgent math’ and addressing root causes:
Notwithstanding the political instability caused by drone strikes, proponents tend to erroneously conflate short-term tactical victory with long-term strategic success. This fact is evident in the concept of ‘insurgent math’, coined by none other than the former commander of the US-led coalition forces in Afghanistan, retired General Stanley McChrystal, which rightly holds that,
"For every innocent person you kill, you create ten new enemies."
In this respect, note that Faisal Shahzad, the so-called Times Square bomber, cited the indiscriminate killings caused by drone attacks to rationalise his terrorist act. As a matter of fact, one of the most important observations made by the declassified 2006 US National Intelligence Estimate report is that,
"The Iraq conflict has become the ‘cause celebre’ for jihadists, breeding a deep resentment of US involvement in the Muslim world and cultivating supporters for the global jihadist movement."
Resentment to indiscriminate civilian deaths caused by illegal invasions and drone strikes fuels more terrorism, which directly undermines the end goal of achieving regional peace and stability. It is occupation and invasion that breeds terrorism, not vice versa. Thus, drone warfare is wholly counterproductive, in that it does nothing to combat the root causes of militant extremism...
While most recent robots have been built with industrial work in mind, one robot was built with the intention to party.
Georgia Tech’s Center for Music Technology has developed Shimi, a musical robot designed to DJ dance parties everywhere.
The smartphone-enabled robot is considered an interactive “musical buddy” that recommends songs based on feedback from the listeners.
“Shimi is designed to change the way that people enjoy and think about their music,” Professor Gil Weinberg, director of Georgia Tech’s Center for Music Technology and the robot’s creator, said in a recent statement.
The robot works in tandem with a smartphone app, which supplies its sensing and music-generation capabilities.
Shimi can use a smartphone’s camera and face-detecting software to follow a listener around the room and position its speakers towards them for optimal sound.
The robot can also use recognition software to sense if someone claps or taps a tempo to play a song that best matches the suggestion.
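Georgia Tech has not released Shimi's code, but the tap-tempo matching described above reduces to estimating beats per minute from the intervals between taps and picking the closest song. The sketch below uses an invented song catalogue; all names and numbers are illustrative:

```python
# Hedged sketch of tap-tempo song matching; not Shimi's actual code.

def bpm_from_taps(timestamps):
    """Estimate tempo (beats per minute) from tap times in seconds."""
    if len(timestamps) < 2:
        raise ValueError("need at least two taps")
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = sum(intervals) / len(intervals)
    return 60.0 / avg

def closest_song(bpm, library):
    """Pick the song whose tempo best matches the tapped BPM.

    `library` maps song title -> BPM (an invented catalogue)."""
    return min(library, key=lambda title: abs(library[title] - bpm))

library = {"slow ballad": 70, "pop track": 120, "dance anthem": 128}
taps = [0.0, 0.5, 1.0, 1.5]   # taps half a second apart -> 120 BPM
print(closest_song(bpm_from_taps(taps), library))   # prints "pop track"
```

A real system would also need onset detection to turn microphone audio into those tap timestamps, which is the genuinely hard part.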
“Many people think that robots are limited by their programming instructions,” Music Technology Ph.D. candidate Mason Bretan said. “Shimi shows us that robots can be creative and interactive.”
The researchers plan to expand Shimi’s capabilities so it can recognize when a person dislikes a song or wants to skip it. They are also planning future apps that will let the user shake their head in disagreement or wave a hand to tell Shimi to skip a song or lower the volume.
Developers will be able to open up Shimi to new capabilities by creating their own apps for the robot.
“I believe that our center is ahead of a revolution that will see more robots in homes, bypassing some of the fears some people have about machines doing everyday functions in their lives,” Weinberg said...
Contrary to the claims of the UIDAI, fingerprints will be a highly inappropriate tool to uniquely identify individuals. Given multiple errors during enrolment and the potentially high error rates at authentication, the use of fingerprint authentication is likely to foster a regime of misidentification and exclusion. Worse still, exclusion will be most acute among poor manual labourers. The poor record of fingerprint readers in U.K. airports and frequent fingerprint mismatches in the U.S. were also a result of fallibilities in the fingerprint technology.
The elderly are another group that would be massively excluded. As the PoC reports admit, those above 60 years had the “highest rejection rates” at authentication. Yet, the Mid-Term Review of the Eleventh Plan by the Planning Commission has recommended the use of Aadhaar fingerprints to pay pensions to the elderly through the National Social Assistance Programme (NSAP). The recommendation is to use “banking correspondents”, who would carry handheld fingerprint devices, to make “payments at the doorstep”. A sure recipe for exclusion, it would appear.
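The exclusion arithmetic the article gestures at is easy to make concrete. The sketch below uses invented numbers (the PoC reports cited do not give exact figures) and assumes independent attempts, which is optimistic, since failures correlate with worn fingerprints:

```python
# Back-of-envelope sketch with invented rates; these are placeholders,
# not UIDAI data.

def locked_out(population, false_reject_rate, attempts):
    """Expected number of people failing all `attempts` tries, assuming
    each attempt fails independently. Real failures correlate with worn
    or damaged fingerprints, so this understates exclusion."""
    return population * false_reject_rate ** attempts

# 100 million pensioners, 5% per-attempt rejection, three tries allowed
print(int(locked_out(100_000_000, 0.05, 3)))   # 12500 people
```

Even under this generous independence assumption, thousands are denied payment; if the same people fail repeatedly, the excluded group is far larger and is concentrated among exactly those the scheme claims to serve.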
In fact, the only group that appears certain to gain from the Aadhaar project is the global biometric industry. As Nandan Nilekani suggested in an interview, “the Unique Identification Project is creating new opportunities for biometric technology…. Our success can, therefore, determine the course the industry will take, since these technologies will be tested in India on an unprecedented scale.” On the other hand, the losses are likely to be felt mostly by the poor. It would be an irony that a project that is marketed in the name of “including the poor” would end up excluding them massively from whatever meager provisions they obtain from the state today...
A new website is taking embarrassing and potentially incriminating status updates from Facebook users who don't use the site's privacy settings and posting them for the whole world to see.
They know what you're doing, and you're not going to like how they found out.
The WeKnowWhatYoureDoing.com “social experiment” is a nightmare come true for people who haven’t figured out how to make their profile private.
"I hate my boss so mch [sic], his so arrogant bloody a--," says “ltumeleleng S,” in a status update seen on the site's homepage.
The posts aren't just limited to people ranting about their bosses. There are also updates about how hungover people are, who's taking drugs and who's got a new phone number.
"Friday to Wednesday sesh complete hahaha," says a status update from Tommy B, "hungover would be an understatement."
The site pulls in the status updates from Facebook's Graph API, a developer tool, and simply posts them in list format for anyone, including bosses, to see.
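For context, the kind of query involved looks roughly like the following. The public-post search endpoint shown existed in Facebook's 2012-era Graph API and has since been removed, and the JSON response here is a made-up sample, not real API output:

```python
# Illustrative sketch only: the search endpoint below belongs to the
# 2012-era Graph API and no longer exists; the sample JSON is invented.
import json
from urllib.parse import urlencode

def search_url(phrase):
    """Build the old Graph API public-post search URL for a phrase."""
    return "https://graph.facebook.com/search?" + urlencode(
        {"q": phrase, "type": "post"})

def extract_updates(payload):
    """Pull author name and message text out of a Graph-style response."""
    return [(item["from"]["name"], item["message"])
            for item in payload["data"] if "message" in item]

sample = json.loads("""{"data": [
  {"from": {"name": "Tommy B"},
   "message": "Friday to Wednesday sesh complete hahaha"}
]}""")
print(search_url("hate my boss"))
print(extract_updates(sample))
```

The point of the "experiment" is that no scraping trickery was needed: any post left public was handed out by the official developer API to anyone who asked.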
Callum Haywood, 18, the site's creator, admits on the site's "About" page that people likely don't want others to see the indiscretions it surfaces, but he blames Facebook users for not properly understanding their privacy options.
Haywood explains that his site is meant to be a learning tool, explaining how users can set their privacy settings to limit who can see their profiles.
A disclaimer on the site warns people who have seen their updates on the site that he "does not accept any responsibility or liability … for any loss or damage of whatever nature … [from] the information on this site."
In other words, if you lose your job, you can’t blame him...
When someone claims they're good at rock-paper-scissors, they're usually just trying to psych you out so they can predict your next move. But this robot, created by the Ishikawa Oku Laboratory in Japan, doesn't need to psych you out, because it knows it will beat you every single time.
How can it do it? One thing it certainly doesn't do is any kind of high-level analysis of the game. It doesn't put your last sequence of moves through a complex semantic analysis and try to predict the move. It doesn't use anti-random tactics like "five scissors in a row" to throw you off. All it needs is a high-speed camera and quick reflexes.
Yes, the robot cheats. By watching the image from a camera that can determine the position of your hand every millisecond, it is aware of your move the very moment you make it. And as soon as your hand starts to form that rock, the robot is giving you some paper to wrap it up. At the very end of the video, you can see the tiny delay between the human making a move and the robot reacting — but it happens so fast that you wouldn't notice except when shown in slow-motion...
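Once the camera has classified the human's forming gesture, winning is trivial; the genuinely hard part, millisecond-scale gesture recognition, is not shown here. A toy sketch of the lookup:

```python
# Toy sketch of the robot's "cheat": given a classified gesture, the
# winning response is a one-line table lookup. The high-speed vision
# that produces `detected_gesture` every millisecond is the real work.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(detected_gesture):
    """Return the move that beats the gesture the camera just detected."""
    return BEATS[detected_gesture]

for human in ("rock", "paper", "scissors"):
    print(f"human plays {human} -> robot plays {counter_move(human)}")
```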
K-pop Star Robots will emulate the dance moves of South Korea’s famous pop groups
With technology improving at a rapid rate, we’re starting to see it incorporated into live shows and tours more than ever. This includes holographic projections of artists who have passed (e.g. Tupac and Elvis), along with LED lights shaped like the bodies of dancers, so we guess dancing robots did not exactly come as a surprise. In South Korea, Dongbu Group has announced that it will release robots that look and dance like the country’s famous K-pop (Korean pop) idols, such as Super Junior and Girls’ Generation.
The project, dubbed “K-pop Star Robots,” is a joint effort by Dongbu Robots, Intelligent Recreational Robots and Ocean Bridge E&T. The robots will come with high-powered motors that allow them to bend 20 joints, from their necks and waists to their thighs and hips, helping them simulate the fluid dance moves of these Korean pop groups. The K-pop Star Robots are expected to launch by the end of the year...
How do you feel about getting undressed in front of a robot?
New research suggests humans may be willing to take off their clothes in front of Star Wars robot R2-D2, but undressing in front of the more human-like C-3PO may be asking too much.
Dr Christoph Bartneck says new tests have found we tend to feel reluctant to smash a robot to pieces, or we may refuse to undress in front of one, because we perceive robots as being "somewhat alive".
And the more human characteristics a robot has, the more our hesitation grows.
Dr Bartneck is a computer scientist at the University of Canterbury and has been at the forefront of human-robot interaction research for more than 10 years in New Zealand and overseas.
He said experiments had revealed an interesting relationship between humans and robots.
"Research studies show us people are reluctant to kill robots with perceived intelligence or that are thought to look human," he said. "We're interested in finding out why they do or don't want to take that step."
Dr Bartneck said it was an important theory to explore as more robots, such as automatic vacuum cleaners, are introduced to homes.
The social side of a robot is evolving.
"When robots enter the home ... they become social actors and that also means that they have to know and respect the values and social norms that we have. And only then will they be acceptable in the home," he said.
"If a robot comes around and acts very inappropriately then we probably would not be very happy with having him. That's why it is important to pay attention to the social aspect of robotics, not only the functional ones, in terms of how quickly can you wash the dishes. Of course that's nice (to know how efficient a robot is) but that's not the whole thing - human communication and human interaction is quite difficult."
Studies show people can also get embarrassed around robots.
"Particularly if a robot looks like us and we're asked to do something with it that we probably wouldn't do with a stranger, like take our clothes off for example.
"From these research findings we know people perceive some robots as having intentional behaviour and being somewhat alive. But we've got a fair way to go before we see them as our equals."
The research is helping improve the way robots worldwide are designed and perform...
A small surveillance drone flies over an Austin stadium, diligently following a series of GPS waypoints that have been programmed into its flight computer. By all appearances, the mission is routine.
Suddenly, the drone veers dramatically off course, careering eastward from its intended flight path. A few moments later, it is clear something is seriously wrong as the drone makes a hard right turn, streaking toward the south. Then, as if some phantom has given the drone a self-destruct order, it hurtles toward the ground. Just a few feet from certain catastrophe, a safety pilot with a radio control saves the drone from crashing into the field.
From the sidelines, there are smiles all around over this near-disaster. Professor Todd Humphreys and his team at the University of Texas at Austin's Radionavigation Laboratory have just completed a successful experiment: illuminating a gaping hole in the government’s plan to open US airspace to thousands of drones.
They could be turned into weapons.
“Spoofing a GPS receiver on a UAV is just another way of hijacking a plane,” Humphreys told Fox News.
In other words, with the right equipment, anyone can take control of a GPS-guided drone and make it do anything they want it to.
“Spoofing” is a relatively new concern in the world of GPS navigation. Until now, the main problem has been GPS jammers, readily available over the Internet, which people use to, for example, hide illicit use of a GPS-tracked company van. It’s also believed Iran brought down that U.S. spy drone last December by jamming its GPS, forcing it into an automatic landing mode after it lost its bearings...
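A simplified way to see why spoofing works: a drone steers to hold whatever position its receiver reports, so biasing the reported fix drags the real vehicle by the same amount. This one-dimensional toy model is an illustration only, not the UT Austin attack:

```python
# Simplified 1-D illustration of GPS spoofing. The controller nudges the
# drone toward `target` using its *reported* position, so a constant
# spoofed bias shifts where the real vehicle settles.

def hold_position(true_pos, target, spoof_offset, gain=0.5, steps=50):
    """Simulate a proportional controller fed a biased GPS fix."""
    for _ in range(steps):
        reported = true_pos + spoof_offset   # spoofer biases the fix
        true_pos += gain * (target - reported)
    return true_pos

honest = hold_position(0.0, 100.0, spoof_offset=0.0)
spoofed = hold_position(0.0, 100.0, spoof_offset=30.0)
print(honest)    # settles near 100, the intended target
print(spoofed)   # settles near 70: displaced by the spoofed bias
```

The real attack is subtler (counterfeit satellite signals must capture the receiver's tracking loops without tripping alarms), but the end state is the same: the autopilot faithfully "corrects" toward wherever the attacker says it is.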
Jimmy Carter isn’t too thrilled with the idea of President Barack Obama – who comes to Atlanta today — picking individual winners and losers in the war on terror. From the Monday op-ed by the former president in the New York Times:
"Revelations that top officials are targeting people to be assassinated abroad, including American citizens, are only the most recent, disturbing proof of how far our nation’s violation of human rights has extended. This development began after the terrorist attacks of Sept. 11, 2001, and has been sanctioned and escalated by bipartisan executive and legislative actions, without dissent from the general public. As a result, our country can no longer speak with moral authority on these critical issues….
Recent legislation has made legal the president’s right to detain a person indefinitely on suspicion of affiliation with terrorist organizations or “associated forces,” a broad, vague power that can be abused without meaningful oversight from the courts or Congress (the law is currently being blocked by a federal judge)….
Despite an arbitrary rule that any man killed by drones is declared an enemy terrorist, the death of nearby innocent women and children is accepted as inevitable. After more than 30 airstrikes on civilian homes this year in Afghanistan, President Hamid Karzai has demanded that such attacks end, but the practice continues in areas of Pakistan, Somalia and Yemen that are not in any war zone. We don’t know how many hundreds of innocent civilians have been killed in these attacks, each one approved by the highest authorities in Washington. This would have been unthinkable in previous times…"
The man sounds downright libertarian. But Carter isn’t alone. Three days ago, former U.S. senator Fritz Hollings of South Carolina declared killing by drone to be “un-American.”
The Japanese design studio Takram was asked to design a water bottle for people to use after a hypothetical future environmental disaster. Takram, imagining what a world would be like with rising sea levels and radioactive disasters, thought that we probably wouldn’t be carrying around water bottles. Instead, they designed an entirely new organ system, to be implanted in the body, that would mean we used less water in the first place.
Their solution, called the Hydrolemic System, involves both harvesting more moisture from the air than our current unmodified bodies are capable of and doing more to retain the water we already have. The studio imagines the system would require us to drink only 0.1 cups of water a day.
Inserts that go in our noses convert moisture in the air we breathe into water, and other inserts at the ends of our renal and digestive systems keep water from leaving by those routes. A collar on our neck converts body heat into electricity, preventing us from perspiring away precious liquid...
What are the laws against drones—and their masters—behaving badly? Turns out, there are few that explicitly address a future where people, companies, and police all command tiny aircraft. But many of our anxieties about that future should be assuaged by existing regulations. We asked Ryan Calo, a law professor at the University of Washington, to weigh in on some of the issues.
Can I use a drone to spy on my sexy neighbor?
Ogle at your own risk, but the fact that you’re spying by plane shouldn’t make a difference. Peeping Tom laws say you can’t view a fully or partially nude person without their knowledge, so long as they have a reasonable expectation of privacy. Chances are, if you need a drone to see her, your neighbor is justified in thinking she’s alone.
If you do spy, she can likely sue for “intrusion upon seclusion.” There are limits: The conduct must be highly offensive to a reasonable person, and courts often dismiss cases where the plaintiff can’t show any real harm. Still, if you want to see your neighbor naked, the safest technology remains your imagination.
Can I use a drone to deliver a cup of coffee?
No—at least not yet. The Federal Aviation Administration, policeman to the nation’s skies, prohibits most commercial use of drones. Hobbyists can fly them outside of populated areas, provided the drones stay within sight and below 400 feet. But delivering a product for compensation is not allowed.
The good news for drone (or latté) enthusiasts is that Congress recently required the FAA to reexamine its policy, under a new law that demands a “comprehensive plan” to allow private-sector drones by fall 2015. Still, technical hurdles cast doubt on whether airborne baristas are the most likely application. The smart money is on robotic paparazzi.
Could a police drone look in my windows for drugs?
Maybe. The law generally doesn’t recognize privacy rights regarding anything that cops can spy from a public vantage. Officers in a helicopter can already look into your backyard without a warrant.
That said, the courts often treat the interior of a home as off-limits. For example, the Supreme Court has rejected the use of thermal-imaging devices to search for indoor grow lights (often used in marijuana cultivation), for fear that the officers might discover “intimate details” such as the time when “the lady of the house takes her daily sauna and bath.”
In that case, the court thought it important that thermal imaging was not in “general public use.” But as drones become common, courts may say we should draw our curtains if we want to maintain an expectation of privacy.
Could the police follow my car with a drone?
Yes, but if it follows you long enough, the police might need a warrant. Generally speaking, officers can follow a vehicle without getting the courts involved—for instance, by driving behind it. But in a recent Supreme Court case, on whether officers need a warrant to affix a GPS device to a car for a month, a majority of justices expressed concern about the length of time. Ultimately, the Court decided the case on a different ground, holding that a warrant was required to attach any object to a car. But police drones might prompt them to revisit just how much public surveillance is too much...
After quietly testing Predator drones over the Bahamas for more than 18 months, the Department of Homeland Security plans to expand the unmanned surveillance flights into the Caribbean and the Gulf of Mexico to fight drug smuggling, according to U.S. officials.
The move would dramatically increase U.S. drone flights in the Western Hemisphere, more than doubling the number of square miles now covered by the department's fleet of nine surveillance drones, which are used primarily on the northern and southwestern U.S. borders.
But the high-tech aircraft have had limited success spotting drug runners in the open ocean. The drones have largely failed to impress veteran military, Coast Guard and Drug Enforcement Administration officers charged with finding and boarding speedboats, fishing vessels and makeshift submarines ferrying tons of cocaine and marijuana to America's coasts.
"The question is: Will they be effective? We have no systematic evidence on how effective they are," said Bruce Bagley, who studies U.S. counter-narcotics efforts at the University of Miami in Coral Gables, Fla.
Despite that, a new control station will arrive this month in Corpus Christi, Texas, allowing Predators based there to cover more of the Gulf of Mexico. An additional drone will be delivered this year to the U.S. Customs and Border Protection's base in Cocoa Beach, Fla., for operations in the Caribbean.
The Federal Aviation Administration has already approved a flight path for the drones to fly more than 1,000 miles to the Mona Passage, the strait between Puerto Rico and the Dominican Republic.
"There is a lot more going on in the deep Caribbean, and we would like to know more," said a law enforcement official familiar with the program who was not authorized to speak publicly. The official said drones may be based temporarily at airfields in the Dominican Republic and Puerto Rico.
The Predator B is best known as the drone used by the CIA to find and kill Al Qaeda terrorists in Pakistan and Yemen. An unarmed version patrols the U.S. borders searching known overland smuggling routes.
On the ocean, however, there are no rutted trails or roads to follow. And the Predator cannot cover as much open water as larger, higher-flying surveillance aircraft, such as the Global Hawk.
"I'm not sure just because it's a UAV [unmanned aerial vehicle] that it will solve and fit in our problem set," the top military officer for the region, Air Force Gen. Douglas M. Fraser, said recently.
Fraser's command contributes ships and manned surveillance airplanes to the Joint Interagency Task Force South. Last year, the task force worked with U.S. agencies and other countries to seize 119 metric tons of cocaine, valued at $2.35 billion.
For the recent counter-narcotics flights over the Bahamas, border agents deployed a maritime variant of the Predator B called a Guardian with a SeaVue radar system that can scan large sections of open ocean. Drug agents can check a ship's unique radio pulse in databases to identify the boat and owner.
The planned drone flights are partly a response to demands from leaders in the western Caribbean to shift more drug agents, surveillance aircraft and ships into the area, as cartels have switched from the closely watched U.S.-Mexico border to seaborne routes. In the last four years, drug seizures in the Caribbean and the Gulf of Mexico have increased 36%, according to the Department of Homeland Security.
"As we tighten the land borders, it squishes out to the seas," said the law enforcement official...
The expansion of military drone technology for surveillance and other domestic uses has the potential to create thousands of new jobs, but it remains unclear what privacy safeguards will be put in place or how usage will be restricted to protect citizens.
“This isn’t the Wild West and we have to have some sort of rules governing how those drones are going to be used,” said Mike Brickner, a spokesman with the American Civil Liberties Union of Ohio. “We don’t want drones to be flying everywhere, deployed and taking pictures of Americans walking down the street not really doing anything.”
Advocates say such concerns are unfounded, and communities will set rules for deploying unmanned aerial vehicles, just as police have rules for pulling a gun or a Taser on a suspect.
“It’s like anything: reasonableness and common sense is going to prevail,” said Michael K. Farrell, president of the Ohio chapter of the Association of Unmanned Vehicle Systems International.
The debate over regulating a possibly booming industry comes as Congress pushes to integrate remotely piloted vehicles into manned airspace by 2015. The Federal Aviation Administration has projected a fleet of 10,000 small UAVs within five years and up to 30,000 within two decades.
Ohio has a huge stake in the outcome. Southwest Ohio is in competition for one of six sites nationally to test UAVs, which could boost a growing industry in the Miami Valley. The region ranks as a hotbed for UAV development with Wright-Patterson Air Force Base and university research centers; aerospace, advanced materials and information technology defense contractors; and educational training, such as at Sinclair Community College.
“We are the natural center for this,” said Deb Norris, the college’s vice president of workforce development and corporate services...
From Cape Canaveral, a 66-foot wingspan, remotely piloted U.S. Customs and Border Protection aircraft takes off in search of drug traffickers, illegal immigrants and terrorists from heights of up to 50,000 feet.
On Lake Okeechobee, researchers hurl a custom-built, 9-foot wingspan plane from an airboat to launch an automated, low-altitude flight to monitor invasive plants.
From large to small, the number of such unmanned aircraft systems — popularly called "drones" — is expected to surge as the federal government works to open civilian airspace to them by 2015. Florida officials hope to position the state as a hub for this fast-growing industry by becoming a test site.
"The skies over Florida will look dramatically different in the years to come," Space Florida President Frank DiBello told a gathering of aerospace professionals this month.
The agency's board recently approved spending up to $1.4 million to try to win designation as one of six test ranges across the country that Congress has directed the Federal Aviation Administration to name by the end of the year.
The test sites hope to show that unmanned systems of all shapes and sizes — from wingspans of inches to more than 240 feet — can fly safely alongside piloted aircraft in different terrain and weather conditions.
As drones proliferate, privacy advocates fear unchecked spying by thousands of airborne vehicles...
Unmanned Aerial Vehicles (UAVs), or drones, are unmanned aircraft that are either controlled by ‘pilots’ from the ground or that fly autonomously, following a pre-programmed mission. The ‘drone’ nickname comes from the constant buzzing noise that some drones make in flight. There are many different types of military drones, but they fall into two main categories: those that are used for reconnaissance, surveillance and intelligence purposes (ISTAR in the military jargon), and those that are also armed and can be used to launch missiles and bombs. Armed Predator and Reaper drones deployed in Afghanistan by the US and UK are launched from Kandahar airbase and controlled by operators in the Nevada desert some 7,500 miles away. Initially, ground support troops launch the drones. Once they are airborne, control is handed over to a crew of three operators, sitting in front of computer screens in specially designed trailers. One person ‘flies’ the drone, another controls and monitors the cameras and sensors which stream images back to the operator’s screens in real time via satellite, while a third person is in contact with the “customers”, ground troops and commanders in the war zone. At the touch of a joystick button the operator can fire missiles or drop bombs on targets showing on a computer screen...
Drone Wars UK is the most comprehensive resource on drones out there.
What if you sat down at your computer to work and it recognized you merely by the way you moved the mouse?
This kind of authentication may not be so far off in the future. The Department of Defense says its Defense Advanced Research Projects Agency is working to create a system that identifies people through a "cognitive fingerprint," rather than using biometric sensors to read fingerprints and other physically identifiable traits.
The Active Authentication program aims to develop new ways of ensuring the person using a computer console is authorized to do so. It is leveraging software to identify unique aspects of a person based on behavioral traits.
The computer user authentication systems being used today require an extra step, such as entering a password or scanning a fingerprint or iris. Passwords inherently have drawbacks, because they can be cracked. The current biometric tools are safer but still take time and interrupt natural workflow.
"What I would like to do is I'd like to move us to a world where you sit down at a console, identify yourself, and you just start working," Richard Guidorizzi, manager of the Beyond Passwords program (part of the Active Authentication effort) said in an article on a Defense Department Website. "The authentication happens in the background -- invisible to you -- while you continue doing your work without interruptions."
The Active Authentication program will have several phases. In the first phase, researchers will look into biometric methods that don't require additional sensors to capture a person's identifying information. The software will be designed to track behavioral patterns and traits to develop a cognitive identity that can be used to authenticate the user. These traits will include how a person moves the mouse or even a person's writing style (as identified by the use of language in emails or documents), DARPA said. It will seek technology that is viable for both small-scale and large-scale deployments during this program phase.
Later phases will focus on integrating new technologies into a software system that can be deployed across the Defense Department to protect desktop and laptop computers. DARPA plans to use open APIs to create a modular system that can leverage other third-party biometrics software and hardware as it is developed in the future.
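The idea of a behavioral "cognitive fingerprint" can be sketched in a few lines (a simplified illustration of the general technique, not DARPA's actual method; all feature choices and thresholds here are assumptions): summarize a user's mouse trajectories into a few statistics during enrollment, then authenticate a new session by checking that its statistics fall within a tolerance of the enrolled profile.

```python
import math

def mouse_features(points):
    """Reduce a mouse trajectory [(x, y), ...] to two simple statistics:
    mean segment length (a speed proxy) and mean turning angle (jerkiness)."""
    segs, angles = [], []
    for i in range(1, len(points)):
        dx = points[i][0] - points[i - 1][0]
        dy = points[i][1] - points[i - 1][1]
        segs.append(math.hypot(dx, dy))
        angles.append(math.atan2(dy, dx))
    turns = [abs(angles[i] - angles[i - 1]) for i in range(1, len(angles))]
    return (sum(segs) / len(segs),
            sum(turns) / len(turns) if turns else 0.0)

def authenticate(profile, session, tol=0.5):
    """Accept if each session feature is within a relative tolerance of
    the enrolled profile (a deliberately crude decision rule)."""
    return all(abs(s - p) <= tol * max(abs(p), 1e-9)
               for p, s in zip(profile, session))

# Enroll: long, smooth diagonal strokes characteristic of one user.
enrolled = mouse_features([(0, 0), (5, 5), (10, 10), (15, 15)])
# Same user later: similar motion, so the session is accepted.
same = mouse_features([(0, 0), (6, 6), (12, 12), (18, 18)])
# Different user: short, jerky right-angle movements, so it is rejected.
other = mouse_features([(0, 0), (1, 0), (1, 1), (0, 1)])

print(authenticate(enrolled, same), authenticate(enrolled, other))  # → True False
```

A deployable system would use far richer features (timing, acceleration, typing cadence, writing style) and a trained classifier rather than a fixed threshold, but the background, sensor-free character of the approach is the same: the signal is how you work, not what you present.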
Drones, those unmanned aerial vehicles used by U.S. forces in Afghanistan and elsewhere, are being expedited for domestic deployment, opening up a frontier of speculation about privacy, the legality of invasive imagery and the militarization of police. Today, Georgia congressman Austin Scott writes about his proposed legislation to restrict drones. We also offer varied opinions about the promise and peril of government and private drones patrolling our borders and, perhaps, your backyard.
By Austin Scott
For the past few years, unmanned aerial vehicles, or “drones” as they are often called, have become a common tool used by our military overseas. Drone technology has been an invaluable resource to our operations in Iraq and Afghanistan.
We’ve also used drone surveillance along our southern border to prevent illegal immigration and combat drug trafficking.
Drones are an attractive tool for the military because of their cost-effectiveness and operator safety, among other things. Because of these benefits, drones also are being more widely considered by local, state and federal law enforcement agencies for domestic use in the United States.
Naturally, this development has stirred debate among many Americans.
The alarm over domestic drone use does not reflect an anti-drone sentiment; rather, it stems from questions about the consequences these relatively quiet unmanned aircraft — some of which are as small as a hummingbird — will have for Americans’ privacy. Without proper limits, drones could present a risk to the protections against “unreasonable searches and seizures” outlined in the Fourth Amendment to the Constitution.
While highly effective and certainly an attractive alternative to risking American lives, drone technology presents a greater risk of being used in a much more intrusive manner than other aerial surveillance technology.
Drones are capable of staying aloft for days at a time and can be equipped with highly sophisticated camera technology that can collect a constant stream of surveillance footage. With advances in drone technology and without a requirement that federal agencies obtain a warrant, drones could conduct surveillance for days without just cause.
To ensure that drones are not used to violate Americans’ Fourth Amendment rights, I recently introduced HR 5925, Preserving Freedom from Unwarranted Surveillance Act. This bill would require the government to obtain a warrant before conducting drone surveillance in the United States.
Recognizing that there are situations where the use of drones is necessary and appropriate, such as search and rescue, my bill aims to balance privacy concerns and the ability to harness drone technology for legitimate domestic use.
Drones have proved highly effective in emergency situations requiring swift action to prevent imminent danger to life — such as wildfires. For these reasons, my bill has emergency exceptions and also would permit the federal government to use drones to patrol the national borders or when they are needed to prevent a terrorist attack.
Beyond these public safety exceptions, the legislation would require law enforcement to acquire a warrant from a judge. This follows the spirit of current laws regarding surveillance.
This technology is evolving rapidly and will soon reach previously unforeseen capabilities. Therefore, we must be proactive and ensure that our laws address these new technologies and maintain the safeguards enshrined in our Constitution.
Without the ability to foresee where drone technology could lead, it is important that we protect Americans’ civil liberties from the outset. This legislation may not be the last word on the issue, but it is an important start in this effort to ensure that drones are not used to infringe on Americans’ privacy rights.
Austin Scott represents Georgia’s 8th Congressional District.
At a time when we as a society can’t resolve issues about the privacy of data, issues that have been simmering for years, the emergence of facial recognition technology is very troubling. "Facial recognition blows up assumptions that we don't wear our identities on our person; it turns our faces into name tags," Ryan Calo, director of privacy at Stanford's Center for Internet and Society, told the San Francisco Chronicle.
The Constitution protects us from arbitrary arrests, and although police sometimes arbitrarily demand ID, that is only supposed to happen when there is probable cause. What happens, then, when a surveillance camera captures your face and that "name tag" ties the image to your identity?
Picture demonstrators in Egypt. If government thugs can identify them by simply using a camera and a computer, would people feel safe protesting in Tahrir Square? Less alarming, but still annoying, is this: Marketers will have the potential to know what store you shop in even if you never take out a credit card, turning your face into the equivalent of a Web site's cookie...
Facebook has hit a potential stumbling block in its efforts to make money from its more than 900 million active users.
The social network has agreed to pay $20 million to settle a lawsuit in California claiming it publicized that some of its users had “liked” certain advertisers but didn't pay the users, or give them a way to opt out.
The so-called “Sponsored Story” feature on Facebook is essentially an advertisement that appears on the site, generally consisting of a friend’s name, profile picture and a statement that the person “likes” that advertiser.
The agreement in California could potentially complicate Facebook’s efforts to accelerate advertising revenue, experts say.
Ever since the company went public last month, critics of the website have said it faces challenges when it comes to drawing in revenue because its users are sensitive to Facebook using their personal information to generate money, and because the website’s advertising is not obtrusive enough to grab the attention of users.
“The fact is the advertisements are not irritating users,” said Rob Enderle, an analyst with Enderle Group. “Advertisers are not getting value because Facebook does not want to upset its users.”
Enderle notes that Facebook’s non-traditional forms of advertising -- using features such as “Sponsored Stories,” or allowing advertisers to establish product-focused pages -- are not having the same effect as more established forms of advertising, which by their nature are conspicuous and direct.
Indeed, a Reuters/Ipsos poll earlier this month showed four out of five Facebook users said they have never bought a product or service as a result of advertising or comments on the social network site. And in mid-May General Motors very publicly yanked $10 million in Facebook advertising, saying paid advertising on the site isn’t effective.
Changing the “Sponsored Stories” feature could cost Facebook $103.2 million, according to economist Fernando Torres’ analysis of the revenue each ad brings to the site. And, according to the lawsuit, Facebook’s Chief Operating Officer Sheryl Sandberg said the value of a “Sponsored Story” advertisement is at least twice and up to three times the value of a standard Facebook ad that doesn’t include a friend endorsement...
Now that we know Facebook is about to get a lot better at recognizing our faces, what can we do about it?
If you’re the sort of person who wants your friendly social media company to get to know you as well as possible, I have good news: You don’t have to do anything at all. Facebook signs you up for facial recognition by default, so all you have to do is sit back and let your friends teach the company’s algorithms exactly how to identify your face in their photos. In fact, there's a good chance this is already happening, since Facebook was using some of Face.com's technology even before the acquisition.
If, on the other hand, you still cling to quaint notions about privacy and anonymity, the news is mixed. There’s no way to stop Facebook from learning what you look like based on the photos in which you’re tagged, and if you haven't already opted out, it may know your mug pretty well already. But you can easily opt out of the feature in which Facebook uses that information to make your name pop up whenever your friends upload a photo of you.
In his Naked Security blog, Graham Cluley of the computer security firm Sophos explains how. His handy guide comes with pictures, but here are the three basic steps:
1. Open your Facebook privacy settings
2. Next to “Timeline and Tagging,” select “Edit Settings.”
3. Next to “Who sees tag suggestions when photos that look like you are uploaded?”, select “No One,” then click “OK.”
You’ll notice that the only choices are “Friends” and “No One.” Surely mindful of the potential blowback, Facebook doesn’t even give you the option to let random strangers identify you based solely on your face—for the time being, anyway. And outside of a few Jeff Jarvis types, it’s hard to imagine a lot of people clamoring for it to be added. (That doesn’t mean Facebook will never do it, of course.)
In his post, Cluley wonders, “If Facebook's facial database is such a great concept, why doesn't the company present its arguments to users as to why they should want to participate in it, and invite them to ‘opt-in’ to being included in the huge collection of faces?”
At a terminal being renovated here at Love Field, contractors are installing 500 high-definition security cameras sharp enough to read an auto license plate or a logo on a shirt.
The cameras, capable of tracking passengers from the parking garage to gates to the tarmac, are a key first step in creating what the airline industry would like to see at airports worldwide: a security apparatus that would scrutinize passengers more thoroughly, but less intrusively, and in faster fashion than now.
It's part of what the International Air Transport Association, or IATA, which represents airlines globally, calls "the checkpoint of the future."
The goal is for fliers to move almost non-stop through security from the curb to the gate, in contrast to repeated security stops and logjams at checkpoints.
After checking their luggage, passengers would identify themselves not with driver's licenses and paper boarding passes, but by scanning fingerprints or irises to prove they have an electronic ticket.
Passengers would walk with their carry-ons through a screening tunnel, where they'd undergo electronic scrutiny — replacing what now happens at as many as three different stops as they're scanned for metal objects, non-metallic items and explosives.
Passengers would no longer have to empty carry-ons of liquids and laptops before putting them on conveyor belts for X-ray scans. They could keep their belts and shoes on. They could avoid a backlog at full-body scanners and a finger swab for explosive residue.
If screeners notice anything suspicious, a passenger would still be pulled aside and possibly patted down. But otherwise, passengers are supposed to reach their gates faster. And machines that accomplish each part of this transformation already exist or are in development.
The changing technology, combined with new screening tactics and changes at airports like the ones under construction here at Love Field, could make the checkpoint of the future a reality in a decade, the airlines say.
"This isn't really science fiction that we're talking about," says Ken Dunlap, IATA's global director of security...
Some U.S. Predator drone attacks may constitute war crimes, and all such killings can encourage others to flout human rights standards, a U.N. investigator said.
Defending armed drone use as a valid response to the Sept. 11, 2001, terrorist attacks is unjustifiable, Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions, told a U.N. Human Rights Council conference in Geneva, Switzerland, after Russia and China issued a joint statement to the council condemning drone attacks.
The CIA's use of armed drones in places like Pakistan and Yemen began under President George W. Bush but has grown dramatically under President Barack Obama.
New technologies that improve remote drone operators' ability to engage easily in combat in far-flung regions have led to growing diplomatic and non-governmental concerns about civilian casualties and about other countries also acquiring drones, The New York Times reported.
Some countries "find targeted killings immensely attractive," said Heyns, a South African human-rights law professor. "Others may do so in future."
Current drone practices "weaken the rule of law," he said. "Killings may be lawful in an armed conflict [such as Afghanistan], but many targeted killings take place far from areas where it's recognized as being an armed conflict..."
No shit, Sherlock.
A micro-aviary of drones that look—and fly—like ladybugs, dragonflies, and other insects. Since 2008, George Huang, professor of engineering at Wright State University in Dayton, Ohio, has managed to produce a butterfly model with a 5-inch wingspan. “We haven’t done a final version where we declare victory,” Huang says. “I’ll be happy once it’s fly-sized.”
Darpa and the Air Force have already invested in similarly tiny craft, though with no firm time horizon for deployment. Regardless, micro-drones’ potential goes beyond the military. “Police could use them to fly into a drug trafficker’s house,” Huang says. “Or in a nuclear or mining accident, you can send a fly inside to find victims.”
A swarm of five Frisbee-sized drones equipped with Wi-Fi transmitters that form a kind of aerial Napster. As conceived by Liam Young, cofounder of London-based think tank Tomorrow’s Thoughts Today, they can “appear, broadcast their network, then disperse and re-form in another part of the city.”
After a spotty test flight last November (two of the drones crashed into a river), the file-sharing copters are set to take to the sky this summer at a Dublin science festival called Hack the City. Meanwhile, the Pirate Bay has announced that it’s building its own fleet. File-sharing drones, like file-sharing itself, hover in a legal gray zone, but Young’s not shaken by the prospect of prosecution. “We see it as our responsibility to get people talking about this,” he says.
The GoJett, a supersonic drone designed to hit Mach 1.4—over 1,000 miles per hour—while weighing less than a person and costing as little as $50,000. Aerospace engineering professor Ryan Starkey and his students at the University of Colorado in Boulder modified their hobby-grade turbojet engine to include military-grade bells and whistles, like nozzles that narrow to accelerate airflow. They’re also working with NASA to develop foil bearings that ride on cushions of air, allowing the engine to be oil-free. Laboratory tests have confirmed that it’s twice as efficient as any engine its size, and Starkey plans to double the efficiency again before its maiden flight.
Low-speed flight tests begin this fall, then shift to high-speed tests in 2013. If successful, Starkey imagines, the GoJett could be used for civilian applications, like penetrating hurricanes to gather data. And Mach 1.4 is just a start. “We’re working on engine technology that’ll go Mach 2 to 3,” Starkey says. “Our first goal, once this is over, will be going faster.”
The kinds of drones making the headlines daily are the heavily armed CIA and U.S. Army vehicles which routinely strike targets in Pakistan - killing terrorists and innocents alike.
But the real high-tech story of surveillance drones is going on at a much smaller level, as tiny remote controlled vehicles based on insects are already likely being deployed.
Over recent years a range of miniature drones, or micro air vehicles (MAVs), based on the same physics used by flying insects, have been presented to the public.
The fear kicked off in 2007, when reports of bizarre flying objects hovering above anti-war protests sparked accusations that the U.S. government was secretly developing robotic insect spies.
Official denials and suggestions from entomologists that they were actually dragonflies failed to quell speculation, and Tom Ehrhard, a retired Air Force colonel and expert on unmanned aerial craft, told the Daily Telegraph at the time that 'America can be pretty sneaky.'
The following year, the US Air Force unveiled insect-sized spies 'as tiny as bumblebees' that could not be detected and would be able to fly into buildings to 'photograph, record, and even attack insurgents and terrorists.'
Around the same time, the Air Force also unveiled what it called 'lethal mini-drones' based on Leonardo da Vinci's blueprints for his Ornithopter flying machine, and claimed they would be ready for rollout by 2015.
That announcement was five years ago and, since the U.S. military is usually cagey about its technological capabilities, it raises the question of what it is keeping under wraps.
The University of Pennsylvania GRASP Lab recently showed off drones that swarm, a network of 20 nano quadrotors flying in synchronized formations.
The goal of the related SWARMS project is to combine swarm technology with bio-inspired drones that operate 'with little or no direct human supervision' in 'dynamic, resource-constrained, adversarial environments.'
However, it is most likely the future of hard-to-detect drone surveillance will mimic nature.
Research suggests that the mechanics of insects can be reverse-engineered to design midget machines to scout battlefields and search for victims trapped in rubble...
In a surprising move, Apple has successfully patented a technology that aims to protect users from data collection by governments, businesses, and cybercriminals.
The patent, released Tuesday by the U.S. Patent and Trademark Office, and uncovered by Patently Apple, outlines a system that clones users’ identities, then inserts fake details into those clones to create a jumbled mess of information in an attempt to throw off data collectors — a privacy protection process that Apple describes as “polluting electronic profiling.”
“A cloned identity is created for a principal [i.e. a user],” reads the patent. “Areas of interest are assigned to the cloned identity, where a number of the areas of interest are divergent from true interests of the principal. One or more actions are automatically processed in response to the assigned areas of interest. The actions appear to network eavesdroppers to be associated with the principal and not with the cloned identity.”
In short, this is an entirely different tactic for combating online privacy invasion. Rather than try to hide your identity — which is becoming increasingly difficult, as the Web seeps deeper and deeper into our lives — Apple’s idea is to simply hide behind a wall of noise.
“Data collection is not prevented; rather, it is encouraged for the cloned identity and intentionally populated with divergent information that pollutes legitimate information gathered about the principal,” reads the patent.
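The mechanism the patent describes can be sketched in a few lines. This is a hypothetical illustration, not Apple's implementation: a cloned identity is seeded with interests deliberately divergent from the user's real ones, and decoy actions are generated so that any data collected mixes signal with noise. All names and data here are made up.

```python
import random

# Illustrative sketch of "profile pollution": build a cloned identity
# whose interests diverge from the user's, then emit synthetic actions
# that an eavesdropper would attribute to the real user.

REAL_INTERESTS = {"photography", "cycling", "jazz"}
INTEREST_POOL = {"fishing", "opera", "golf", "knitting", "nascar",
                 "photography", "cycling", "jazz", "chess", "surfing"}

def make_clone(real_interests, pool, n=3, seed=0):
    """Create a cloned identity with n interests divergent from the real ones."""
    rng = random.Random(seed)
    divergent = sorted(pool - real_interests)  # only topics the user doesn't care about
    return {"interests": rng.sample(divergent, n)}

def decoy_actions(clone, per_interest=2):
    """Generate synthetic actions (e.g. searches) for the clone's interests."""
    return [f"search:{topic}" for topic in clone["interests"]
            for _ in range(per_interest)]

clone = make_clone(REAL_INTERESTS, INTEREST_POOL)
noise = decoy_actions(clone)
# Every decoy topic is guaranteed to lie outside the user's true interests,
# so a profile built from these actions diverges from the real profile.
print(noise)
```

The key design point, per the patent's own language, is that collection is not blocked at all; the collector simply ends up with a profile dominated by manufactured interests.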
Sounds good — a little too good.
Of course, there’s no guarantee that Apple will ever release a product that includes this profile pollution technology, but it is certainly encouraging (and not the least bit curious) to see the Cupertino electronics giant diving into these waters.
We must also note that this type of technology seems quite an odd endeavor for Apple. Not entirely, mind you — the company is notorious for protecting its own privacy, after all — but it does appear to come out of nowhere. In fact, such a technology feels out of place coming from any large corporation. As Apple itself notes in the patent: “Individuals, particularly American citizens, have always been suspect of the motivations and actions of their government and ‘Big Business.’”
That we have. And I must admit, the sheer unusualness of this patent leaves me equally suspect and uneasy — it’s just too good to be true...
Odds are that a database on some server somewhere in the world contains your "faceprint": a digital representation of the shapes and spacings that make your mug yours.
It's likely as unique as a fingerprint and probably far more valuable to companies and the government, both of which are investing heavily in technologies to match faces to identities.
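Conceptually, a faceprint is just a numeric feature vector, and identification is a nearest-neighbor search over enrolled vectors with a distance threshold. The following is a toy sketch of that idea, not any vendor's actual algorithm; the four-dimensional "faceprints" and threshold are invented for illustration (real systems use learned embeddings with hundreds of dimensions).

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.5):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in database.items():
        d = distance(probe, faceprint)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy enrolled database of made-up 4-dimensional faceprints.
db = {"alice": [0.1, 0.9, 0.3, 0.5],
      "bob":   [0.8, 0.2, 0.6, 0.1]}

print(identify([0.12, 0.88, 0.31, 0.49], db))  # near alice's faceprint
print(identify([0.5, 0.5, 0.5, 0.5], db))      # too far from everyone: None
```

The threshold is where the policy questions live: set it loose and the system false-matches strangers; set it tight and it fails to recognize enrolled faces, which is exactly the error-rate trade-off in the border-control incidents described below.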
There are obviously useful applications, like automatically tagging your buddies in a social-network photo or - on an entirely different scale - recognizing known terrorists at airports. But there are frightening ones as well: allowing authoritarian states to identify peaceful protesters, enabling companies to accrue ever greater insight into private lives or empowering criminals to dig up sensitive information about strangers.
"Facial recognition blows up assumptions that we don't wear our identities on our person; it turns our faces into name tags," said Ryan Calo, director of privacy at Stanford's Center for Internet and Society. "It can be good and helpful, or it can be dangerous."
At a minimum, the technology demands a serious policy debate over the appropriate ground rules for this tool. But, of course, government officials are still grappling with online privacy questions from a decade ago, as private industry and law enforcement happily march ahead.
Just this week, Facebook officially acquired the facial recognition service Face.com, with reports putting the price tag at $55 million to $100 million. The Menlo Park social network has long licensed the technology to allow users to easily tag their friends in photos, but now presumably has greater power to leverage the tool in new ways.
In October, the technology and government publication Nextgov reported the FBI was building a nationwide facial recognition service, beginning with pilot tests this year in Michigan, Washington, Florida and North Carolina. It's one piece of a broader, $1 billion initiative to bulk up the bureau's fingerprint database with other biometric markers, including iris scans and voice recordings.
Facial recognition technology has been around for three decades. But the mobile and social revolutions are rapidly driving the field forward, as digital photos proliferate, cloud computing power grows and software capabilities advance.
The more tagged photos there are of any given person - in different lighting conditions and from different angles - the more accurate the results become. In May, Face.com said it had scanned more than 41 billion photos, which could be combined with Facebook's own massive collection. Last year, the company said it had 100 billion images on file, with users adding more than 100 million tags per day...
There has been a steadily growing movement to implement international identification and more centralized systems for tracking people. Politicians have claimed that measures like face-recognition databases and iris scans make the world a safer place. However, recent studies of the policies already in place cast serious doubt on the effectiveness of these precautions. The Electronic Frontier Foundation reveals:
Automatic Face Recognition in Border Control
Biometric data of individuals’ faces has been used since 2007 at various European border checks. Eleven airports in the United Kingdom now have e-passport gates that scan EU travelers’ faces and compare them to measurements of their facial features (i.e., biometrics) stored on a chip in their biometric passports. Although error rates of state-of-the-art facial recognition technologies have been reduced over the past 20 years, these technologies still cannot identify individuals with complete accuracy. In an incident in 2011, the Manchester e-passport gates let through a couple who had mixed up their passports. The UK Border Agency subsequently disabled the Manchester gates and launched an investigation.
Similar e-passport gates have been introduced in Australia and New Zealand. During early testing in Australia, the technology showed a six to eight percent error rate, and it likewise misidentified two men who had exchanged passports. Nevertheless, the government refused to disclose the final error rates, citing security concerns.
Digital Fingerprint Recognition
U.S. law requires visitors to submit biometrics to a central database in the form of a digital fingerprint when seeking a visa or when entering the country. EU law further requires all passports for the 26 countries in the Schengen area (the borderless travel zone within Europe) to contain digital fingerprint data on a chip.
A German court recently asked the EU Court of Justice for a preliminary ruling on the legality of biometric passports with RFID chips, which are readable from a distance. The German court questioned whether the EU regulation that requires biometric passports in Europe is compatible with the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights.
In France, a report last year disclosed the questionable security of biometric passports. It showed that 10 percent of biometric passports were fraudulently obtained for illegal immigrants or people looking for a new identity. Following the issues with respect to biometric passports in the various EU countries, Members of the European Parliament have queried the European Commission about the reliability of these biometric passports.
Iris Scan Identification
In preparation for the UK’s national ID card scheme, the UK government noted that there was little research indicating the reliability of iris scan identification. The government initially relied upon unpublished and unverified results from an airport trial. There were concerns that “hard contact lenses,” “watery eyes and long eyelashes” could prevent accurate scanning. The government then asked the National Physical Laboratory (NPL) to test the technology. The NPL chief research scientist stated in the news that “technologies like iris scanning are accurate enough for the ID cards application but only provided they are implemented properly and one has appropriate fall-back processes to deal with exceptional cases.” But a study has shown that it is difficult to enroll disabled individuals into an iris database. The success of enrollment also significantly varies depending on race and age, suggesting further errors if the technology were implemented. Additional testing of iris scanners has been initiated by the U.S. Department of Homeland Security...