AB,
My so-called "assertions" about the limitations of man-made computer software are based upon a working lifetime of programming computers, which began in 1968 and continues to this day.
Whoosh! Your assertions aren’t “so-called”, they’re just assertions. Specifically, you assert that it’s “physically impossible” for a computing device ever to be self-aware. You rely for this assertion, though, on your knowledge of machines that are hugely simpler than brains – and yet, for reasons you never feel like sharing, you somehow jump from that knowledge to a statement of un-argued and un-evidenced certainty about what could never be.
Is there any point in asking you yet again to explain why you think that no computing device – even one as astonishingly complex as a human brain – could be self-aware, or should we assume that that’s just another of your statements of personal faith?
I have always had a fascination with programming and have never looked upon it as just a job. I pride myself on building fully automated systems which require little or no manual input. My current system involves four servers working in parallel, pulling information from several thousand web pages every day, then processing and formatting this information, which is passed on to my business colleague, who sells it to our clients. And all I have to do (apart from checking that it is all working) is ensure that the servers are all switched on and logged in.
That’s nice. And how complex would you say that system is compared with a human brain, with its roughly 100 billion neurons and 100 trillion synapses? Your setup will do calculations more quickly than any human can, and will store more data than any human can. What it can’t do, though – at least not yet – is come anywhere close to the parallel processing complexity of a brain. And yet somehow you extrapolate from that a “never”?
How so? How could you possibly know what artificial intelligence will be capable of in ten years, thirty years, a hundred years, as it gets more complex and as we continue to understand better and better the architecture of brain processing?
Just say, “that’s an article of faith I happen to have” if you want to, but assertions aren’t facts no matter how often you make them.
You are quite correct in observing that software can be used to produce the emergent properties you mention. But this emergence is entirely intended by the programmer, even though the results may be unexpected. The root causation is always in the mind and implementation of the programmer.
That’s fundamentally wrong – all sorts of unexpected outcomes arise when “computers” interact with each other. Imagine you could design a computer that mimicked precisely the functionality of an ant. And then imagine that you made lots of them. Each “ant” would be programmed to follow certain simple rules (“lay down a strong pheromone trail if you find food”, “follow a strong pheromone trail if your role is to forage for food” etc.) but that’s all. Guess what, though? Yup, you’d end up nonetheless with ant “societies” that built complex structures aligned to prevailing weather conditions, farmed, evicted rival colonies and did all sorts of things that not one part of their programming intended.
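To make that concrete, here’s a minimal toy sketch in Python (the grid size, pheromone decay rate, ant count and nest/food positions are all made-up illustration values, not a real model of any ant). Each simulated ant knows only the two rules above; nothing in the code mentions a trail:

```python
import random

# Toy stigmergy demo: each "ant" follows two dumb rules --
# drop pheromone while carrying food home, and prefer stronger
# pheromone cells while foraging. No rule says "build a trail".

SIZE = 20                       # toy grid, SIZE x SIZE
NEST, FOOD = (0, 0), (15, 15)   # arbitrary illustration positions
EVAPORATION = 0.98              # made-up decay factor per tick

pheromone = [[0.0] * SIZE for _ in range(SIZE)]

class Ant:
    def __init__(self):
        self.pos = NEST
        self.carrying = False

    def step(self):
        x, y = self.pos
        # Legal one-cell moves (up/down/left/right, inside the grid).
        moves = [(x + dx, y + dy)
                 for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                 if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]
        if self.carrying:
            # Rule 1: lay pheromone and head back toward the nest.
            pheromone[x][y] += 1.0
            self.pos = min(moves, key=lambda p: abs(p[0] - NEST[0]) +
                                                abs(p[1] - NEST[1]))
            if self.pos == NEST:
                self.carrying = False
        else:
            # Rule 2: forage -- mostly follow the strongest pheromone,
            # occasionally wander at random.
            if random.random() < 0.1 or all(
                    pheromone[px][py] == 0 for px, py in moves):
                self.pos = random.choice(moves)
            else:
                self.pos = max(moves, key=lambda p: pheromone[p[0]][p[1]])
            if self.pos == FOOD:
                self.carrying = True

ants = [Ant() for _ in range(50)]
for tick in range(2000):
    for ant in ants:
        ant.step()
    for row in pheromone:
        for i in range(SIZE):
            row[i] *= EVAPORATION   # trails fade unless reinforced

# Show where pheromone has accumulated: '#' marks strong cells.
for x in range(SIZE):
    print("".join("#" if pheromone[x][y] > 1 else "."
                  for y in range(SIZE)))
```

Run it and the hash marks trace a rough path between nest and food – a structure that appears in no individual ant’s rules, and that the programmer never authored line by line.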
Is any of this sinking in yet?