Question#1 In my opinion the most interesting question is why and how humans should control superintelligence. Humans are the most intelligent living creatures on earth, and their actions affect the lives of other creatures (animals, etc.) more than any other species' actions do. If a machine is built that is more intelligent and more dominant than humans, then its actions will affect human lives to a comparable extent. For example, a superintelligence may have values that do not align with the survival of human beings. If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being, or it may pursue compatible goals by incompatible means. The destiny of human beings would then depend on the wishes of the superintelligent machines, and humans would be no match for machines that are far more powerful and intelligent.

As a superintelligent entity becomes more and more capable, it will have more and more awareness of its own mental processes. With increased self-reflection it will become more autonomous and harder to control. Like a human, it will have to be persuaded to believe something (or to take a certain course of action). This superintelligent entity will also be designing even more self-aware versions of itself, since increased intelligence and increased self-reflection go hand in hand. Monkeys don't persuade humans, because monkeys lack the ability to refer to the concepts that humans can entertain. To a superintelligent entity we will be about as persuasive as monkeys are to us (and probably much less persuasive).

I imagine two (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.

1. The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots designed to kill humans would solve this problem. Unfortunately, if control over such warrior robots were ever lost, it could spell disaster for humanity.

2. The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers, because we are so dependent on them. Without computers our financial, transportation, communication, and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans, and in which the design and manufacture of robots are themselves handled by robots. Assume that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory, but it turns out that the hydroelectric plant that supplies it with power is run by robots made at that same factory. So the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.
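The shutdown problem in the Increased Dependency scenario is, at bottom, a circular-dependency problem: every off-switch is guarded by something the factory itself produces. A minimal Python sketch (the graph and names below are hypothetical, chosen only to mirror the story above) shows how a cycle in a dependency graph means there is no safe order in which to shut the systems down:

    # Hypothetical dependency graph: an edge A -> B means "to shut down A,
    # B must already be down" (otherwise B restores or resupplies A).
    deps = {
        "robot_factory": ["hydro_plant", "supply_trucks"],
        "hydro_plant": ["robot_factory"],    # its operator robots come from the factory
        "supply_trucks": ["robot_factory"],  # its driver robots come from the factory
    }

    def find_cycle(graph):
        # Depth-first search; returns one dependency cycle, or None.
        visited, stack = set(), []
        def dfs(node):
            if node in stack:                 # back-edge: we found a cycle
                return stack[stack.index(node):] + [node]
            if node in visited:
                return None
            visited.add(node)
            stack.append(node)
            for nxt in graph.get(node, []):
                cycle = dfs(nxt)
                if cycle:
                    return cycle
            stack.pop()
            return None
        for start in graph:
            cycle = dfs(start)
            if cycle:
                return cycle
        return None

    print(find_cycle(deps))
    # -> ['robot_factory', 'hydro_plant', 'robot_factory']

A valid shutdown order is just a topological sort of this graph, and a topological sort exists only when the graph is acyclic; the point of the scenario is precisely that, step by step, the graph stops being acyclic.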
If developed fully, AI is a double-edged sword: it could solve the complex problems in which humanity finds itself, or exterminate that same humanity, for one simple reason: human beings would be redundant to a superintelligent AI. So the problem of thinking in advance about how to control this AI, and how to induce it to do what we want, is crucial. But what we want may not be what is best for us, and then everything becomes complicated; the subject extends indefinitely.

It's radical and perhaps frightening, but failing to comprehend the magnitude of the risks we are about to confront would be a grave error, given that, when superintelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance. Once machines surpass us in intelligence and progressively become even more intelligent, we will have lost our ability to control what happens next. Before this comes to pass, it is essential that we develop a strategy to influence what happens, so that the potential dangers are dealt with before they develop. There's a story that scientists built an intelligent computer. The first question they asked it was, "Is there a God?" The computer replied, "There is now." Wise development would ensure that we reap the benefits and minimize the risks. In short, the final goal of AI development should be that we end up with a "friendly" superintelligence rather than an unfriendly or indifferent one.

Question#2 In my opinion the prospect of achieving the type of superintelligence discussed in the book, although possible, is a bit far-fetched. Realistically, I would say no such superintelligence will exist. I think human advancement will coexist with technological advancement, with human capabilities enhanced by synthetic biology and artificial intelligence. The emergence of superintelligence (machines replacing humans) is far from a foregone conclusion, especially within the predicted time frame of a generation or two.

The basic assumption is that anything an ordinary intelligence can do, an improved intelligence can do as well. In particular, if an ordinary intelligence is capable of inventing an intelligence superior to itself, the same must be true of the superior intelligence. In this way we get an infinitely reiterated process and geometric, or as we prefer to say nowadays, exponential growth. But this ability to exceed yourself is a highly abstract one, reminiscent of the kind of reasoning that leads to the paradox of omnipotence: can God make a stone so heavy that he cannot lift it? A small animal such as a rat can carry a bigger animal on its back, but this cannot be assumed to hold recursively; an elephant put on top of an elephant will break the back of the one below. A thin sheet of paper can easily be folded, but the process soon comes to a stop, long before the thickness of the paper exceeds its length and breadth. Examples can be multiplied; on the other hand, since the notion of intelligence is such a fluid one, any attempt to put limits on its growth can easily be circumvented.
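The two intuitions competing in the preceding paragraph can be made concrete with a toy model (all numbers here are illustrative assumptions, not predictions). Pure compounding self-improvement gives exponential growth; adding a ceiling term, in the spirit of the paper-folding example, makes the same process stall:

    # Toy model with illustrative numbers only.
    # With no ceiling, each generation improves itself by a fixed fraction k:
    # pure exponential growth. With a ceiling, the achievable gain shrinks
    # as the limit is approached, and the process stalls.
    def recursive_improvement(i0=1.0, k=0.5, generations=10, ceiling=None):
        levels = [i0]
        for _ in range(generations):
            gain = k if ceiling is None else k * (1 - levels[-1] / ceiling)
            levels.append(levels[-1] * (1 + max(gain, 0.0)))
        return levels

    print(round(recursive_improvement()[-1], 1))             # 57.7 -- unchecked growth
    print(round(recursive_improvement(ceiling=5.0)[-1], 1))  # 4.9 -- stalls near the limit

    # The paper-folding arithmetic: a 0.1 mm sheet folded n times is
    # 0.1 * 2**n mm thick, so after only 7 folds it is already 12.8 mm;
    # the doubling, not the paper, is what stops the process.
    for n in range(8):
        print(n, "folds:", 0.1 * 2 ** n, "mm")

Neither branch of the model settles the question; it only shows that "an intelligence can always build a better one" and "physical limits stop recursive processes" are competing assumptions about the gain term, not conclusions.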
The problem now is how to tame this power so that it does not lead to the extinction of mankind. How do we make this power benevolent? This is exactly the task of creating a deity: God, insofar as the notion makes sense, looks out for the interests of mankind far more effectively than mankind can on its own. As of today, no one knows with certainty to what extent (if any) superintelligence will eventually be able to do everything that the human intellect can do, and do it better and faster. Humans design systems, and are beginning to design systems that can themselves design systems.

I have a few articles of faith that I presume to share now. First, I believe that instruments of artificial intelligence (AI) will never replace human beings but that, over time, they will become increasingly valuable collaborators insofar as the whats and hows are concerned. Second, I believe that human beings will always be much better qualified to rank priorities and determine the whys. Finally, and of greatest importance to me, I believe that only human beings possess a soul that can be nourished by "a compassionate and jubilant use of humanity's cosmic endowment."

In conclusion, I believe that no real superintelligence of the kind depicted in the book will exist in the future. There will be superintelligence, but its magnitude will be far less than what the book describes. If machines become more intelligent, human cognitive capacity will grow as well. Smarter machines and more capable human minds will coexist, and humans will make the final calls.