Predicting the Future Changes the Future

khan2012

Chrono Cadet
If a time traveler comes from the future and makes statements about events yet to come, that time traveler alters the future in an unpredictable way...

http://www.scientificamerican.com/article.cfm?id=merging-of-mind-and-machine

quote:

"
Sometime early in this century the intelligence of machines will exceed that of humans. Within a quarter of a century, machines will exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement. They will claim to have feelings and, unlike today’s virtual personalities, will be very convincing when they tell us so. By around 2020 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.

"
 
By around 2020 a $1,000 computer will at least match the processing power of the human brain.


Processing power is not the same as intelligence. All these claims of emerging intelligence in machines are really just hype. There is, for example, even a robot now that can 'recognise' people, animals, cars, etc. in any photograph. But is this device intelligent? It has simply been programmed with preset algorithms that say 'a certain set of pixels in xxx shape is likely to be a human', or a giraffe, or a truck, or whatever. There's no conscious thought involved... it's just a glorified calculator.

Just because a task is complex... doesn't mean that a robot solving it is intelligent. When a pocket calculator works out that 2 + 2 = 4... we don't say it is intelligent. The calculator is just doing what it's pre-programmed to do. So why is recognising faces, cars, trees, etc. any different?
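For what it's worth, here is a toy sketch (in Python, with completely made-up shapes and thresholds; no real vision system is this crude, and nothing below comes from any actual product) of what such a 'preset algorithm' amounts to: arithmetic on pixel values plus rules somebody wrote down in advance.

```python
import numpy as np

def classify_region(pixels):
    """Toy hand-coded rule: label a grayscale region by its rough shape.

    'pixels' is a 2-D array of brightness values (0-255). The rules and
    thresholds are invented purely for illustration: the output is fully
    determined by numbers someone chose before the image ever arrived.
    """
    mask = pixels > 128                # "bright enough to be the object"
    height, width = mask.shape
    fill = mask.mean()                 # fraction of the box the object fills
    aspect = height / width            # tall-and-narrow vs. short-and-wide

    if aspect > 2.0 and fill > 0.3:
        return "probably a person (tall, narrow blob)"
    if aspect < 0.7 and fill > 0.5:
        return "probably a car (wide, solid blob)"
    return "unknown"

# A bright 40x15 blob trips the "person" rule -- no thought required.
print(classify_region(np.ones((40, 15)) * 200))
```

Whether a system that learns its own thresholds from examples is different in kind is, of course, exactly what the rest of this thread argues about.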
 
Processing power is not the same as intelligence. All these claims of emerging intelligence in machines are really just hype. There is, for example, even a robot now that can 'recognise' people, animals, cars, etc. in any photograph. But is this device intelligent? It has simply been programmed with preset algorithms that say 'a certain set of pixels in xxx shape is likely to be a human', or a giraffe, or a truck, or whatever. There's no conscious thought involved... it's just a glorified calculator.

Just because a task is complex... doesn't mean that a robot solving it is intelligent. When a pocket calculator works out that 2 + 2 = 4... we don't say it is intelligent. The calculator is just doing what it's pre-programmed to do. So why is recognising faces, cars, trees, etc. any different?

The human brain is a quantum computer, and it is only a matter of... time before artificial quantum computers become self-aware.


http://www.crnano.org/interview.degaris.htm

quote:

"
Long term, say 50-100 years, I am very, very worried, and I’m predicting the worst war that humanity has ever seen. Specifically I’m concerned over the issue of species dominance. When you look at the technology, I haven’t even mentioned quantum computing, which is exponentially more powerful than classical computing. Once the technical problems are solved, I imagine a whole flood of applications. We’re limited more by our imaginations than anything else.

So when you think of femtosecond switching, trillion trillion bits, self-assembling 3-dimensional circuitry with no heat – all of these fabulous 21st century technologies will force the issue. Humanity will have to decide whether we remain the dominant species or not. Today, this idea is pretty much science fiction to most people, but there are a growing number of people like me who do take the prospect seriously, and it’s a question of when, not if. It will definitely be in this century. Humanity will have to make this enormous decision, whether we build these “artilects” – artificial intellects. These artilects could be godlike. Imagine a self-assembling artilect the size of an asteroid. When you do the math, and analyze the potential capacity of these things compared to the human brain, you realize that these artilects would be literally trillions of times superior to the human brain. If you start taking these numbers seriously, then you start asking serious political questions: imagine that humanity does create these artilects. They would be immortal, they could go anywhere, change their shape, have virtually unlimited memory capacities, have a huge number of sensors. They will be thinking a million times faster than humans. At the moment, this is all science fiction, but the debate is starting to heat up amongst specialists in the field.

"
 
I'd say that rather than "intelligence," with respect to a machine, one might use the term "simulated intelligence," because the responses are anticipated and programmed in. But there is a really good question here: if intelligence can be simulated that well, to what degree does human intelligence exist?

Take the case of Copernicus. Why did Copernicus see the pattern (and he said he got the idea from ancient writings) of the earth traveling around the sun, rather than a completely earth-centered view?

Copernicus got an idea: how do you compute an idea? I suppose, though, a machine could have superimposed the data on different coordinate systems and found a better match than the earth-centered theory -- but there goes the intelligence argument.

But are we any more than a "ghost in a machine", capable of altering the output of the machine somewhat? It would have to be a ghost with a limited memory of its past, though.

But there does seem to be a rule of thumb at work here: if we can clearly understand something, we can find a way to simulate it mechanically.

I just noticed that your avatar got a haircut!
 
Processing power is not the same as intelligence. All these claims of emerging intelligence in machines are really just hype. There is, for example, even a robot now that can 'recognise' people, animals, cars, etc. in any photograph. But is this device intelligent? It has simply been programmed with preset algorithms that say 'a certain set of pixels in xxx shape is likely to be a human', or a giraffe, or a truck, or whatever. There's no conscious thought involved... it's just a glorified calculator.

Just because a task is complex... doesn't mean that a robot solving it is intelligent. When a pocket calculator works out that 2 + 2 = 4... we don't say it is intelligent. The calculator is just doing what it's pre-programmed to do. So why is recognising faces, cars, trees, etc. any different?

Well, I feel I need to step in and at least add something to the "intelligent computers" side of this discussion, mostly because of my experience in these areas. First, I think we all need to agree that (despite the popular press portraying it this way) intelligence is not a boolean function, where you point to one entity and claim it is not intelligent but then some higher level is. No, intelligence is a continuum, just like any other information-derived activity. So the problem is really more one of developing metrics for deciding when some set of software (and hardware!) has met the intelligence level of, say, a cockroach, or a dog.

However, with that being said, I should point out that the ability to reason (about oneself or one's environment) is clearly a sign of intelligence. And there has been quite a bit of work with respect to reasoning systems, both in the white world and in the dark world (the latter of which I obviously cannot discuss). But in the former, there is a powerful technique called "model-based reasoning" which is being used in many system development areas.

Basically, the same systems engineering ontological models that we develop to help guide a complex system's development can be made available to the "plant software system" itself. This amounts to a large network of relationships that prescribe three major domains of any system:

Operational Domain - How (and for what purpose) a system is used. Interestingly enough, these models are purely time-domain based!
Functional Domain - What transforms a system performs on input energy/information.
Physical Domain - What physical components and interfaces comprise the system and its energy/information inputs.

By providing a system with this ontological model, in relational database form, along with heuristic software to traverse it, the system can actually reason, learn, and make informed decisions about itself in situ (i.e. not pre-programmed). The primary application of this technology in the white world is in what is called "model-based reasoning for health management of a system." It works exceedingly well, and we have seen systems come up with novel solutions (ones not predicted by a human) to situations they have been presented with.
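For readers who haven't met the idea, here is a deliberately tiny, purely hypothetical sketch of the flavour of it (the component names and relations below are invented and merely stand in for what would really live in the relational database; this is not any particular real tool). The model of the system is stored as plain relations, and a small traversal routine reasons over them to explain an observed symptom, rather than looking the answer up in a pre-written fault table.

```python
# Physical domain: which component feeds which (energy/information flow).
FEEDS = {
    "battery":   ["power_bus"],
    "power_bus": ["pump", "flight_computer"],
    "pump":      ["coolant_loop"],
}

# Functional domain: the transform each component performs.
PERFORMS = {
    "battery":   "store energy",
    "power_bus": "distribute power",
    "pump":      "circulate coolant",
}

def upstream_suspects(symptom_node):
    """List every component whose failure could explain a loss at
    'symptom_node', by walking the feed relations backwards."""
    suspects, frontier = [], [symptom_node]
    while frontier:
        node = frontier.pop()
        for component, outputs in FEEDS.items():
            if node in outputs and component not in suspects:
                suspects.append(component)
                frontier.append(component)
    return suspects

# Symptom: no coolant flow. The model itself yields the candidate causes.
for c in upstream_suspects("coolant_loop"):
    print(f"{c}: failure would break '{PERFORMS[c]}'")
```

Note that nothing in the traversal routine mentions pumps or batteries; the 'knowledge' lives in the model. That separation is what lets the same reasoning code cope with situations nobody enumerated in advance.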

RMT
 
By providing a system with this ontological model, in relational database form, along with heuristic software to traverse it, the system can actually reason, learn, and make informed decisions about itself in situ (i.e. not pre-programmed). The primary application of this technology in the white world is in what is called "model-based reasoning for health management of a system." It works exceedingly well, and we have seen systems come up with novel solutions (ones not predicted by a human) to situations they have been presented with.



I'm sure you are familiar with John Searle's 'Chinese Room' response to the Turing Test. It is a good analogy for the notion that the computer does not itself 'understand' what it is doing but is simply going through the motions.

That is the fundamental problem with artificial intelligence. A computer can beat the world chess champion at chess, but only because it can calculate vastly more of the possible permutations of moves, a good deal faster than the chess player can. The computer does not understand that it is playing chess... it is simply manipulating electrons in its processor in accordance with preset algorithms.

Those electrons don't 'mean' anything to it. And in my view, it is 'meaning' (omg... the scientists' pet hate... teleology) that is the core criterion for intelligence.
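To make that concrete, here is a bare-bones sketch of the kind of preset search a chess engine runs. The game below is a trivial stand-in (take 1-3 counters, whoever takes the last one wins) so the example stays short; a real engine adds move generators, pruning, opening books and tuned evaluation functions, but the core is the same mechanical enumeration of positions.

```python
def best_move(counters, maximizing=True):
    """Plain minimax: return (score, move) for the side to act. Scores are
    from the maximizing player's point of view: +1 means the maximizer can
    force a win from here, -1 means it cannot."""
    if counters == 0:
        # Whoever just moved took the last counter and won.
        return (-1 if maximizing else 1), 0
    best = None
    for take in (1, 2, 3):
        if take > counters:
            break
        score, _ = best_move(counters - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = best_move(10)
print(f"From 10 counters, take {move}; forced result for first player: {score}")
```

Swap the counters game for a chess position generator and an evaluation function and you have the skeleton of the program that beats the champion; at no point does anything resembling 'understanding' enter the loop.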

Perhaps the best example I can think of is sitting right in front of me. This is my own 'Turing Test'... and I think it may be better than the original...

My monitor has converted all those processor electrons into several million coloured pixels. Now... does a pixel at the top left of the screen 'understand' that it is in some way 'connected' with one at the bottom right? As far as I can see, each pixel has no idea that it is part of anything. It requires an observer to give those several million pixels some 'meaning'. Consider, then, a black pixel on the screen. That is zero output. Zero electrons. How does that pixel 'know' whether this is because the monitor is off, or because it is actually part of an image of a totally dark room?

I seriously wonder how a computer could 'understand' such a concept.
 
and we have seen systems come up with novel solutions (ones not predicted by a human) to situations they have been presented with.


Somehow I don't buy this line of reasoning for intelligence. A top-class chess program can beat the world chess champion. Is that because the computer had greater 'reasoning' power? No... the computer is simply faster at processing the information, and the world chess champion cannot calculate 100 billion (or whatever) permutations a second, so he misses a move that the computer readily sees. Clearly... given more time, the world chess champion WOULD have spotted the move... and it is no different with any system you care to mention. So I would not gauge the ability of a system to (in the short term) come up with unforeseen solutions as a measure of either greater reasoning or intelligence.
 
I seriously wonder how a computer could 'understand' such a concept.

One would first have to pin down a technical definition of what it means to understand. Therein lies one of the problems of assessing intelligence via understanding.

I'm sure you are familiar with John Searle's 'Chinese Room' response to the Turing Test. It is a good analogy for the notion that the computer does not itself 'understand' what it is doing but is simply going through the motions.

The flipside becomes problematic, of course, because you would deem a human being to be intelligent from a "black box" perspective. In other words, you already know the means by which a computer computes (that is the "white box" approach). Thus, even if a computer could pass the Turing Test, your argument falls apart when we realize we do not currently understand how the human machine arrives at "understanding" (however you define it).

You simply cannot deny intelligence only because you know the mechanism from which it arises. It is a biased approach. Through such an approach, nothing will ever be deemed "intelligent" outside of a human, and the ironic thing is we can't describe how the human machine manifests it! No, I am afraid the only viable means for assessing intelligence is a "black box" approach, which is why the Turing Test still remains the best test we know of... because it neither cares about nor evaluates the technology which exhibits "intelligence". That is why I think we need a continuum-based metric.

RMT
 
K, this is interesting and all, but what does this have to do with time travel? The human brain cannot look into the future to predict the future, so there is no outcome to risk changing in the first place. So that's that. All this talk of intelligence, memory, blah blah blah has nothing to do with time.

Shouldn't this subject be elsewhere?
 
You simply cannot deny intelligence only because you know the mechanism from which it arises. It is a biased approach. Through such an approach, nothing will ever be deemed "intelligent" outside of a human, and the ironic thing is we can't describe how the human machine manifests it! No, I am afraid the only viable means for assessing intelligence is a "black box" approach, which is why the Turing Test still remains the best test we know of... because it neither cares about nor evaluates the technology which exhibits "intelligence". That is why I think we need a continuum-based metric.


I'd suspect that what we regard as 'intelligence' is an emergent phenomenon, and thus a simple metric would not work. I have little doubt that true intelligence goes way beyond just problem solving. When a machine starts to ponder its own existence, which requires consciousness, then one would begin to get there. But there again, in theory one could program a machine to ACT as if it were self-conscious.

This is the famous 'Zombie Paradox'... how on earth does one prove that even one's fellow human being is conscious (let alone a machine)? Would it be possible to manufacture a creature (something akin to Data in Star Trek) that looked and acted just like a conscious being but which was totally unconscious of its actions? I suspect so, for I doubt that the emergent nature of intelligence and consciousness lies just within behaviour; it is also hard-wired into the physical structure.

I would argue that the 'Zombie'....even if it acted just like a human and even got up and went to work in the morning....is still just going through the pre-programmed motions.
 
I'd suspect that what we regard as 'intelligence' is an emergent phenomenon, and thus a simple metric would not work.

I suspect you are right! :) It doesn't have to be a simple metric. But there needs to be a metric (or a set of them) nonetheless. Otherwise this argument continues purely out of subjectivity.

I have little doubt that true intelligence goes way beyond just problem solving.

Define "true intelligence". Same problem. Do you at least agree that intelligence is a continuum, and not a threshold?

This is the famous 'Zombie Paradox'... how on earth does one prove that even one's fellow human being is conscious (let alone a machine)?

Agree totally. You see, we PRESUME that our fellow human is intelligent for no other reason than that they appear to be the same basic organism that we are. That is why my argument is that you will ALWAYS claim that any machine whose processing you understand the basics of is not intelligent. Yet we do not know how a human does its thing; we simply presume its intelligence. It was the very reason that Turing saw fit to prevent the tester from having any knowledge that s/he was really conversing with a computer. That knowledge creates a bias, an immediate presumption of non-intelligence, that cannot be dismissed.

For all we know, every single thought we have and action we take could be part of a set of "programming" so large that we cannot conceive of its complexity. We are extremely non-linear machines. And it is only in the last 10-15 years that we have purposefully tried creating computers that directly implement non-linear behaviors. My whole profession of control systems has always been so interested in creating linear responses in control laws, because they are easy to analyze and easier to predict for purposes of verification. Yet we have known for at least the last two decades that non-linearity is how you eke out more performance and more bandwidth from any system.
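As a toy illustration of that linear-versus-nonlinear point (the plant, gains and limits below are invented for the example, not taken from any real system): both control laws drive a simple integrator plant toward a setpoint. The proportional law is trivial to analyse; the high-gain law clipped at an actuator limit is nonlinear, harder to analyse, and noticeably faster.

```python
def settle_time(control_law, steps=400, dt=0.05, setpoint=1.0):
    """Simulate a pure-integrator plant (dx/dt = u) with forward Euler and
    return the time taken to get within 2% of the setpoint."""
    x = 0.0
    for k in range(steps):
        error = setpoint - x
        if abs(error) < 0.02:
            return k * dt
        x += dt * control_law(error)
    return float("inf")

linear    = lambda e: 2.0 * e                         # proportional law, gain 2
saturated = lambda e: max(-5.0, min(5.0, 20.0 * e))   # gain 20, clipped at +/-5

print("linear law settles in    ", settle_time(linear), "s")     # ~1.9 s
print("saturated law settles in ", settle_time(saturated), "s")  # ~0.2 s
```

The numbers themselves don't matter; the point is the trade described above: squeezing out performance at the cost of the easy analysis that linear laws give you.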

I think we are closer than you think! :)

RMT
 
For all we know, every single thought we have and action we take could be part of a set of "programming" so large that we cannot conceive of its complexity. We are extremely non-linear machines.


I've seen a number of articles and documentaries which indicate that... horror of horrors (from the conventional science perspective)... emotion may be the key to intelligence. And it sort of makes sense. Mr Spock would have an extremely hard time making up his mind what he wanted to do based solely on logic. A logical choice would have to consider all possible outcomes, and have some possibly arbitrary cut-off for eliminating an impossibly large range of options. Will Mr Spock holiday in Lanzarote or Barbados? Hmm... how on Earth (or Vulcan) would he decide logically?

That irrational thing called emotion allows one to cut through a lot of the decision-making process and come up with a simple 'because I like it!'. Evolution-wise... this makes a lot of sense. Abandon pure logic and introduce irrationality to cut the processing down. Of course, there's no guarantee that this won't lead to beings who 'like' to BASE jump without a parachute... but evolution soon takes care of them... and generally what people like is what's good for them (says Twighlight, puffing on his cigarette).
 
I am detecting a tendency for you to ignore a question. I hope that is not the case, for that would be as rude in a debate as the things you have pointed out about others. So let me put it to you again, in the hope of getting an answer:

Do you at least agree that intelligence is a continuum, and not a threshold?

Please and thanks,
RMT
 
For all we know, every single thought we have and action we take could be part of a set of "programming" so large that we cannot conceive of its complexity.

For many years I have explored the world of ideas. Simply from experience, I have learned that an idea that springs to mind clearly and seems to be an epiphany is not my own. That's the time to turn to a search engine and check it out. I have always found that "my" idea had a long history of antecedents. So I would venture to state that self-evident ideas are someone else's thoughts. :)
 
Would it still be you? Or would the true you ...be dead?


http://india.smashits.com/wikipedia/Posthuman_future

quote:

"

Steven Pinker, a cognitive neuroscientist and author of How the Mind Works, poses the following hypothetical, which is an example of the Ship of Theseus paradox:


Surgeons replace one of your neurons with a microchip that duplicates its input-output functions. You feel and behave exactly as before. Then they replace a second one, and a third one, and so on, until more and more of your brain becomes silicon. Since each microchip does exactly what the neuron did, your behavior and memory never change. Do you even notice the difference? Does it feel like dying? Is some other conscious entity moving in with you?

"
 
I am detecting a tendency for you to ignore a question. I hope that is not the case, for that would be as rude in a debate as the things you have pointed out about others. So let me put it to you again, in the hope of getting an answer:

Do you at least agree that intelligence is a continuum, and not a threshold?


Hmm, my apologies... but I would have thought that the following statement I made earlier answered that question:


I'd suspect that what we regard as 'intelligence' is an emergent phenomenon, and thus a simple metric would not work.


Clearly, as an emergent phenomenon, I would have to regard it as a threshold. My definition, as I stated in later posts, is that a truly intelligent machine would be capable not just of processing information... but would be aware that it was processing information. In other words, intelligence requires consciousness.
 
According to Rudolf Steiner -- an old-time esoteric guy -- consciousness is followed by self-conscious consciousness, which is what I think you're talking about. But he has that followed by consciousness self-conscious consciousness. I'm not too sure what that would be.

Steiner also predicted, in future evolution, human sex organs located in the throat. This was fifty years or so before "Deep Throat." :D
 
Processing power is not the same as intelligence. All these claims of emerging intelligence in machines are really just hype. There is, for example, even a robot now that can 'recognise' people, animals, cars, etc. in any photograph. But is this device intelligent? It has simply been programmed with preset algorithms that say 'a certain set of pixels in xxx shape is likely to be a human', or a giraffe, or a truck, or whatever. There's no conscious thought involved... it's just a glorified calculator.

Can you prove that the human brain is not just a glorified calculator?
 
Can you prove that the human brain is not just a glorified calculator?


It is just that. But as it's the human brain that also decides what intelligence is... then by definition it has to be some sort of benchmark in the matter.
 