#cybernetics – @h4x0r3d on Tumblr

through h4x0r3d's eyes

@h4x0r3d / h4x0r3d.tumblr.com


Fully Autonomous Weapons Would Increase Danger to Civilians

November 19, 2012

  • The United Kingdom’s Taranis combat aircraft, whose prototype was unveiled in 2010, is designed to strike distant targets, “even in another continent.” While the Ministry of Defence has stated that humans will remain in the loop, the Taranis exemplifies the move toward increased autonomy.
  • © 2010 AP Photo
  • © 2012 Russell Christian for Human Rights Watch
  • The South Korean SGR-1 sentry robot, a precursor to a fully autonomous weapon, can detect people in the Demilitarized Zone and, if a human grants the command, fire its weapons. The robot is shown here during a test with a surrendering enemy soldier.
  • © 2007 Getty Images


Giving machines the power to decide who lives and dies on the battlefield would take technology too far. Human control of robotic warfare is essential to minimizing civilian deaths and injuries.  

Steve Goose, arms director

(Washington, DC) – Governments should pre-emptively ban fully autonomous weapons because of the danger they pose to civilians in armed conflict, Human Rights Watch said in a report released today. These future weapons, sometimes called “killer robots,” would be able to choose and fire on targets without human intervention.

The 50-page report, “Losing Humanity: The Case Against Killer Robots,” outlines concerns about these fully autonomous weapons, which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians. In addition, the obstacles to holding anyone accountable for harm caused by the weapons would weaken the law’s power to deter future violations.

“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose, Arms Division director at Human Rights Watch. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”

“Losing Humanity” is the first major publication about fully autonomous weapons by a nongovernmental organization and is based on extensive research into the law, technology, and ethics of these proposed weapons. It is jointly published by Human Rights Watch and the Harvard Law School International Human Rights Clinic.

Human Rights Watch and the International Human Rights Clinic called for an international treaty that would absolutely prohibit the development, production, and use of fully autonomous weapons. They also called on individual nations to pass laws and adopt policies to prevent the development, production, and use of such weapons at the domestic level.

Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield.
The United States is a leader in this technological development. Several other countries – including China, Germany, Israel, South Korea, Russia, and the United Kingdom – have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.

“It is essential to stop the development of killer robots before they show up in national arsenals,” Goose said. “As countries become more invested in this technology, it will become harder to persuade them to give it up.”

Fully autonomous weapons could not meet the requirements of international humanitarian law, Human Rights Watch and the Harvard clinic said. They would be unable to distinguish adequately between soldiers and civilians on the battlefield or apply the human judgment necessary to evaluate the proportionality of an attack – whether civilian harm outweighs military advantage.

These robots would also undermine non-legal checks on the killing of civilians. Fully autonomous weapons could not show human compassion for their victims, and autocrats could abuse them by directing them against their own people. While replacing human troops with machines could save military lives, it could also make going to war easier, which would shift the burden of armed conflict onto civilians.

Finally, the use of fully autonomous weapons would create an accountability gap. Trying to hold the commander, programmer, or manufacturer legally responsible for a robot’s actions presents significant challenges. The lack of accountability would undercut the ability to deter violations of international law and to provide victims meaningful retributive justice.

While most militaries maintain that for the immediate future humans will retain some oversight over the actions of weaponized robots, the effectiveness of that oversight is questionable, Human Rights Watch and the Harvard clinic said. Moreover, military statements have left the door open to full autonomy in the future.

“Action is needed now, before killer robots cross the line from science fiction to feasibility,” Goose said.


Augmented-reality eyewear is the next step toward a future in which we never again have an unmediated view of the world

Terminator Vision (Orion Pictures)

Google announced yesterday that before the end of 2012, you will be able to buy augmented-reality smart eyeglasses from the search giant. The Android-powered glasses will have an onboard camera that monitors in real time what you see as you walk (or, heavens preserve us, drive) down the street. The lenses will then overlay information about people, locations, and whatnot directly into your field of view.

We knew this day was coming, but I certainly didn't suspect it'd be so soon. Never again will you have to wonder “Where is the closest Pizza Hut?” or “What make of car is that?” or “Don't I know her from somewhere?” Ubiquitous smartphones have already given us the ability to swiftly look up information with only a moderate disruption. Smartglasses completely remove the mediating step of pausing to wonder and ponder and research: data is simply there, an inseparable part of your visible world.

Overlay Google Maps onto the real world, and navigation becomes effortless. Overlay reviews and menus onto restaurant storefronts as you pass them; overlay nutritional data onto your plate as you eat; overlay purchasing info if you particularly admire your co-worker's new shoes; overlay translations of foreign signage, breaking news, hilarious kittens romping at your feet.

As smartglasses become popular, the world will start to seem naked and inaccessible without a glossy data layer on everything. Everyday activities (maneuvering through the physical world, socializing, working, learning) will all be increasingly eased by the glasses, until these activities start to feel almost impossible without them. Who's going to have the patience to laboriously explain facts to a non-data-overlaid person? Give you my business card? Point you in the direction of Fifth Avenue? I don't even remember how to spell my name! Where are your Googles?

Will businesses see the need for physical signs and billboards? Will municipalities bother to maintain physical street signs and traffic signals? Will smartglasses make the university lecturer's blackboard and salesman's PowerPoint obsolete as well?

What comes after that? With everyone wearing glasses (or, at this point in the future, contact lenses or implants), individual appearance becomes as malleable on the street as it is now on the Internet. You can overlay your real body with a digitally altered one, saving money on subtle nose surgery or just completely living life as a furry avatar.

What, though, will it take to get us to that tipping point, when head-up augmented reality suddenly shifts from a novelty to a ubiquity? Wearing cumbersome goggles on your face as you proceed through your day is a bit more of an intrusion than I, for one, am ready for. Sony's 3DTV goggles are impressive and designed to be worn only in the comfort of your couch, and still I have yet to meet someone who owns a pair. The gear will have to be small and easy to integrate with your basic life processes. Perhaps AR windshields in our cars will become common first, before we put them on our faces.

But however it comes, the fully mediated future has begun.

 How The 1% and the Machines May Come To Rule Us All
That’s my own sensational title, but it’s very fitting. This article, from The Atlantic and part of a three-part excerpt from a new book, is basically the article (and book?) I’ve been waiting four years to read! And it's also very timely with OWS, which is an awesome bonus.
As RCS followers know, I’m a major tech lover. That said, tech innovation also scares the pants off of me. There are a few reasons for that, with a Terminator apocalypse low on the list. More realistically, a major concern of mine has been how technology will put people out of work. Because while technology can empower the average citizen (e.g. 3DP), it can also decimate fields of work, even destroying businesses. (Just think of how badly the US Postal Service is doing due to email, or how self-driving cars will affect taxi drivers and truckers.)
This is a serious problem. A very serious problem, and one which I’ve been thinking about for some time. What will we do when robots can replace us? And don’t think it’s just simple, menial labor. It might almost be the opposite. There’s a lot of research into machines and programs for augmenting (read: decimating) and replacing (you read that right) even highly skilled positions like doctors (e.g. here, here, and here) and, say, stock traders. Because sure, there may still be a few positions left for humans even after machines mostly take control (e.g. one security guard monitoring 10 security cameras), but even cutting half of the positions in a field will produce an avalanche of problems. For instance, already now, with unemployment at 9% [!], we’re seeing many students dropping out of college because the cost of tuition is extremely high and there’s no job security; thus there's less incentive to go, especially with the risk of heavy debt weighing down on a graduate’s shoulders.
And it’s a bit odd: On the one hand, robotic labor seems to open up the possibility of a more Utopian world where people needn’t work much (or at all?) to live; however, unless some big changes happen to the planet, the more likely scenario is that a few extremely wealthy people will simply own the robots that can do everything. And what will we do when there’s little-to-no work for us to do?
I don’t know. Like I said, it scares the pants off me. This article doesn’t suggest a solution (though it does emphasize the greater need for higher education, and perhaps we should work to include that in the public schooling system). And I’d really like to hear one.
Seriously, if you have any ideas, let me know!
(Hat tip to emergentfutures for the link.)
  • RCS Highlights:
At least since the followers of Ned Ludd smashed mechanized looms in 1811, workers have worried about automation destroying jobs. Economists have reassured them that new jobs would be created even as old ones were eliminated…. However.. There is no economic law that says that everyone, or even most people, automatically benefit from technological progress... [T]echnological progress is not a rising tide that automatically raises all incomes. Even as overall wealth increases, there can be, and usually will be, winners and losers. And the losers are not necessarily some small segment of the labor force like buggy whip manufacturers. In principle, they can be a majority or even 90% or more of the population… If wages can freely adjust… [then] at some point, the equilibrium wages for workers might fall below the level needed for subsistence. A rational human would see no point in taking a job at a wage that low, so the worker would go unemployed and the work would be done by a machine instead… As technology continues to advance in the second half of the chessboard [nice Kurzweil reference - Ari], taking on jobs and tasks that used to belong only to human workers, one can imagine a time in the future when more and more jobs are more cheaply done by machines than humans. And indeed, the wages of unskilled workers have trended downward for over 30 years, at least in the United States. …lower pay only postpones the day of reckoning. Moore’s Law is not a one-time blip but an accelerating exponential trend… We’ll start with skill-biased technical change… A lot of factory automation falls into this category, as routine drudgery is turned over to machines…
It’s clear … that wage divergence accelerated in the digital era. As documented in careful studies..  the increase in the relative demand for skilled labor is closely correlated with advances in technology, particularly digital technologies. Hence, the moniker “skill-biased technical change,” or SBTC…. Ever-greater investments in education, dramatically increasing the average educational level of the American workforce, helped prevent inequality from soaring as technology automated more and more unskilled work… A key aspect of SBTC was not just the skills of those working with computers, but more importantly the broader changes in work organization that were made possible by information technology. The most productive firms reinvented and reorganized.. to get the most from the technology… The second division is between superstars and everyone else. Many industries are winner-take-all or winner-take-most competitions, in which a few individuals get the lion’s share of the rewards… The superstars in each field can now earn much larger rewards than they did in earlier decades. The effects are evident at the top of the income distribution. The top 10% of the wage distribution has done much better than the rest of the labor force, but even within this group there has been growing inequality. Income has grown faster for the top 1% than the rest of the top decile. In turn, the top 0.1% and top 0.01% have seen their income grow even faster. This is not run-of-the-mill skill-biased technical change but rather reflects the unique rewards of superstardom... If technology exists for a single seller to cheaply replicate his or her services, then the top-quality provider can capture most—or all—of the market. The next-best provider might be almost as good yet get only a tiny fraction of the revenue. Technology can convert an ordinary market into one that is characterized by superstars. 
Before the era of recorded music, the very best singer might have filled a large concert hall but at most would only be able to reach thousands of listeners over the course of a year… Once music could be recorded and distributed at a very low marginal cost, however, a small number of top performers could capture the majority of revenues in every market, from classical music’s Yo-Yo Ma to pop’s Lady Gaga… According to economist Emmanuel Saez, the top 1% of U.S. households got 65% of all the growth in the economy since 2002. In fact, Saez reports that the top 0.01% of households in the United States—that is, the 14,588 families with income above $11,477,000—saw their share of national income double from 3% to 6% between 1995 and 2007… The third division is between capital and labor. Most types of production require both machinery and human labor… If the technology decreases the relative importance of human labor in a particular production process, the owners of capital equipment will be able to capture a bigger share of income from the goods and services produced... According to the recently updated data from the U.S. Commerce Department, recent corporate profits accounted for 23.8% of total domestic corporate income, a record high share that is more than 1 full percentage point above the previous record. Similarly, corporate profits as a share of GDP are at 50-year highs. Meanwhile, compensation to labor in all forms, including wages and benefits, is at a 50-year low. Capital is getting a bigger share of the pie, relative to labor.
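The “second half of the chessboard” line in the excerpt above refers to the old wheat-and-chessboard legend: one grain on the first square, doubled on each of the 64 squares. A quick sketch of the arithmetic (my own illustration, not from the article) shows why the second half is where things get absurd:

```python
# Wheat-and-chessboard doubling: square i holds 2**i grains.
# Compare the totals for the first and second halves of the board.
first_half = sum(2**i for i in range(32))        # squares 1-32
second_half = sum(2**i for i in range(32, 64))   # squares 33-64

print(first_half)                  # 4,294,967,295 (~4.3 billion grains)
print(second_half)                 # ~1.8e19 grains
print(second_half // first_half)   # the second half dwarfs the first by 2**32
```

The point of the metaphor: steady exponential growth feels manageable for a long time, then the absolute increments become overwhelming.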

The day when doctors can patch up the human brain with electronics, cyborg-style, hasn’t dawned just yet. But if the rats at Tel Aviv University are any indication, that day may not be so very far away. Researchers there have developed a synthetic cerebellum that has restored lost brain function in rats, demonstrating that artificial brain analogs can potentially replace parts of the brain that aren’t functioning properly. Paging Officer Alex Murphy.

The team’s synthetic cerebellum is more or less a simple microchip, but it can receive sensory input from the brainstem, interpret that nerve input, and send the appropriate signal to a different region of the brainstem to initiate the appropriate movement. Right now it is only capable of dealing with the most basic stimulus/response sequence, but the very fact that researchers can do such a thing marks a pretty remarkable leap forward.

To achieve such a breakthrough, the cerebellum was a pretty ideal place to start. Its architecture is simple enough, and one of its functions is to orchestrate motor movements in response to stimuli, making it easy enough to test. Using what they already knew about the way a rat’s cerebellum interacts with its brainstem to generate motion, the researchers built a chip that mimicked that kind of neural processing and activity.
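The loop described above (sensory input from the brainstem, interpretation, motor command back out) can be caricatured in a few lines. This is purely my own toy sketch of a threshold-based stimulus/response mapping, not the Tel Aviv team's actual chip logic:

```python
# Toy "synthetic cerebellum": read a pattern of input spikes from
# simulated brainstem channels and issue a motor command.
# Purely illustrative -- not the real chip's algorithm.

def interpret(sensory_spikes, threshold=3):
    """Fire a motor command once enough input channels spike."""
    return "blink" if sum(sensory_spikes) >= threshold else "rest"

# Five input channels, each spiking (1) or silent (0):
print(interpret([1, 1, 1, 1, 0]))  # -> blink (strong stimulus)
print(interpret([0, 0, 1, 0, 0]))  # -> rest  (below threshold)
```

The real device learns the mapping from recorded cerebellar activity rather than using a fixed threshold, but the input → interpret → output shape of the loop is the same.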


from afp: A hair-thin electronic patch that adheres to the skin like a temporary tattoo could transform medical sensing, computer gaming and even spy operations, according to a US study published Thursday. The micro-electronics technology, called an epidermal electronic system (EES), was developed by an international team of researchers from the United States, China and Singapore, and is described in the journal Science. "It's a technology that blurs the distinction between electronics and biology," said co-author John Rogers, a professor in materials science and engineering at the University of Illinois at Urbana-Champaign. "Our goal was to develop an electronic technology that could integrate with the skin in a way that is mechanically and physiologically invisible to the user."


  www.tv-robotics.co.za

The Pentagon’s Defense Advanced Research Projects Agency (DARPA) plans to have fully humanoid robots that think, act, react, learn, make decisions all on their own, and live amongst us all by the year 2025 or even sooner. We are talking only 16 years or less. Robotics is a lot more important to the New World Order agenda than a lot of people may think. The NWO knows that almost none of the police or military are actually going to turn on the citizens of their own country and enforce martial law and a police state. This is why (IMO) robots are a crucial factor in the success of the NWO. Intelligent humanoid robots are exactly what the NWO needs in order to police and enslave everyone.

Duration: 1:21


Big Picture: Technologist Raymond Kurzweil has a radical vision for humanity's immortal future

Photo-Illustration by Ryan Schude for TIME

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.

True? True.
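It's easy to put rough numbers on that compounding. Assuming a doubling period of about 18 months (my assumption, borrowing the classic Moore's-law figure; the article doesn't commit to one):

```python
# Compound doubling: performance after t years, with capability
# doubling every 1.5 years (an assumed Moore's-law-style period).
def speedup(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

print(round(speedup(10)))    # -> 102: roughly a hundredfold per decade
print(f"{speedup(46):.2e}")  # ~1.7e9 over the 46 years since Kurzweil's 1965 demo
```

The exact doubling period matters less than the shape of the curve: any fixed doubling period gives growth that looks flat for a while and then vertical.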

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
